Nov 28 01:42:57 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023 Nov 28 01:42:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com. Nov 28 01:42:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M Nov 28 01:42:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 28 01:42:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 28 01:42:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 28 01:42:57 localhost kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 28 01:42:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. 
Nov 28 01:42:57 localhost kernel: signal: max sigframe size: 1776 Nov 28 01:42:57 localhost kernel: BIOS-provided physical RAM map: Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 28 01:42:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable Nov 28 01:42:57 localhost kernel: NX (Execute Disable) protection: active Nov 28 01:42:57 localhost kernel: SMBIOS 2.8 present. 
Nov 28 01:42:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014 Nov 28 01:42:57 localhost kernel: Hypervisor detected: KVM Nov 28 01:42:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 28 01:42:57 localhost kernel: kvm-clock: using sched offset of 1784718827 cycles Nov 28 01:42:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 28 01:42:57 localhost kernel: tsc: Detected 2799.998 MHz processor Nov 28 01:42:57 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000 Nov 28 01:42:57 localhost kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 28 01:42:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000 Nov 28 01:42:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef] Nov 28 01:42:57 localhost kernel: Using GB pages for direct mapping Nov 28 01:42:57 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff] Nov 28 01:42:57 localhost kernel: ACPI: Early table checksum verification disabled Nov 28 01:42:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 28 01:42:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 28 01:42:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 28 01:42:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 28 01:42:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040 Nov 28 01:42:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 28 01:42:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 28 01:42:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4] Nov 28 01:42:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570] 
Nov 28 01:42:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f] Nov 28 01:42:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694] Nov 28 01:42:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc] Nov 28 01:42:57 localhost kernel: No NUMA configuration found Nov 28 01:42:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff] Nov 28 01:42:57 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff] Nov 28 01:42:57 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB) Nov 28 01:42:57 localhost kernel: Zone ranges: Nov 28 01:42:57 localhost kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 28 01:42:57 localhost kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 28 01:42:57 localhost kernel: Normal [mem 0x0000000100000000-0x000000043fffffff] Nov 28 01:42:57 localhost kernel: Device empty Nov 28 01:42:57 localhost kernel: Movable zone start for each node Nov 28 01:42:57 localhost kernel: Early memory node ranges Nov 28 01:42:57 localhost kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 28 01:42:57 localhost kernel: node 0: [mem 0x0000000000100000-0x00000000bffdafff] Nov 28 01:42:57 localhost kernel: node 0: [mem 0x0000000100000000-0x000000043fffffff] Nov 28 01:42:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff] Nov 28 01:42:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 28 01:42:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 28 01:42:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges Nov 28 01:42:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608 Nov 28 01:42:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 28 01:42:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 28 01:42:57 
localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 28 01:42:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 28 01:42:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 28 01:42:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 28 01:42:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 28 01:42:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 28 01:42:57 localhost kernel: TSC deadline timer available Nov 28 01:42:57 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff] Nov 28 01:42:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff] Nov 28 01:42:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices Nov 28 01:42:57 localhost kernel: Booting paravirtualized kernel on KVM Nov 28 01:42:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 28 01:42:57 localhost kernel: setup_percpu: NR_CPUS:8192 
nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1 Nov 28 01:42:57 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144 Nov 28 01:42:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support Nov 28 01:42:57 localhost kernel: Fallback order for Node 0: 0 Nov 28 01:42:57 localhost kernel: Built 1 zonelists, mobility grouping on. Total pages: 4128475 Nov 28 01:42:57 localhost kernel: Policy zone: Normal Nov 28 01:42:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M Nov 28 01:42:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space. Nov 28 01:42:57 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Nov 28 01:42:57 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 28 01:42:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 28 01:42:57 localhost kernel: software IO TLB: area num 8. Nov 28 01:42:57 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved) Nov 28 01:42:57 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0 Nov 28 01:42:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1 Nov 28 01:42:57 localhost kernel: ftrace: allocating 44803 entries in 176 pages Nov 28 01:42:57 localhost kernel: ftrace: allocated 176 pages with 3 groups Nov 28 01:42:57 localhost kernel: Dynamic Preempt: voluntary Nov 28 01:42:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation. 
Nov 28 01:42:57 localhost kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8. Nov 28 01:42:57 localhost kernel: #011Trampoline variant of Tasks RCU enabled. Nov 28 01:42:57 localhost kernel: #011Rude variant of Tasks RCU enabled. Nov 28 01:42:57 localhost kernel: #011Tracing variant of Tasks RCU enabled. Nov 28 01:42:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 28 01:42:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8 Nov 28 01:42:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16 Nov 28 01:42:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 28 01:42:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____) Nov 28 01:42:57 localhost kernel: random: crng init done (trusting CPU's manufacturer) Nov 28 01:42:57 localhost kernel: Console: colour VGA+ 80x25 Nov 28 01:42:57 localhost kernel: printk: console [tty0] enabled Nov 28 01:42:57 localhost kernel: printk: console [ttyS0] enabled Nov 28 01:42:57 localhost kernel: ACPI: Core revision 20211217 Nov 28 01:42:57 localhost kernel: APIC: Switch to symmetric I/O mode setup Nov 28 01:42:57 localhost kernel: x2apic enabled Nov 28 01:42:57 localhost kernel: Switched APIC routing to physical x2apic. Nov 28 01:42:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 28 01:42:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Nov 28 01:42:57 localhost kernel: pid_max: default: 32768 minimum: 301 Nov 28 01:42:57 localhost kernel: LSM: Security Framework initializing Nov 28 01:42:57 localhost kernel: Yama: becoming mindful. Nov 28 01:42:57 localhost kernel: SELinux: Initializing. 
Nov 28 01:42:57 localhost kernel: LSM support for eBPF active Nov 28 01:42:57 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 28 01:42:57 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 28 01:42:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 28 01:42:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 28 01:42:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 28 01:42:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 28 01:42:57 localhost kernel: Spectre V2 : Mitigation: Retpolines Nov 28 01:42:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 28 01:42:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 28 01:42:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 28 01:42:57 localhost kernel: RETBleed: Mitigation: untrained return thunk Nov 28 01:42:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 28 01:42:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 28 01:42:57 localhost kernel: Freeing SMP alternatives memory: 36K Nov 28 01:42:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 28 01:42:57 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues. Nov 28 01:42:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1. Nov 28 01:42:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1. Nov 28 01:42:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1. Nov 28 01:42:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. 
Nov 28 01:42:57 localhost kernel: ... version: 0 Nov 28 01:42:57 localhost kernel: ... bit width: 48 Nov 28 01:42:57 localhost kernel: ... generic registers: 6 Nov 28 01:42:57 localhost kernel: ... value mask: 0000ffffffffffff Nov 28 01:42:57 localhost kernel: ... max period: 00007fffffffffff Nov 28 01:42:57 localhost kernel: ... fixed-purpose events: 0 Nov 28 01:42:57 localhost kernel: ... event mask: 000000000000003f Nov 28 01:42:57 localhost kernel: rcu: Hierarchical SRCU implementation. Nov 28 01:42:57 localhost kernel: rcu: #011Max phase no-delay instances is 400. Nov 28 01:42:57 localhost kernel: smp: Bringing up secondary CPUs ... Nov 28 01:42:57 localhost kernel: x86: Booting SMP configuration: Nov 28 01:42:57 localhost kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 Nov 28 01:42:57 localhost kernel: smp: Brought up 1 node, 8 CPUs Nov 28 01:42:57 localhost kernel: smpboot: Max logical packages: 8 Nov 28 01:42:57 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS) Nov 28 01:42:57 localhost kernel: node 0 deferred pages initialised in 25ms Nov 28 01:42:57 localhost kernel: devtmpfs: initialized Nov 28 01:42:57 localhost kernel: x86/mm: Memory block size: 128MB Nov 28 01:42:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 28 01:42:57 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear) Nov 28 01:42:57 localhost kernel: pinctrl core: initialized pinctrl subsystem Nov 28 01:42:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 28 01:42:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations Nov 28 01:42:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 28 01:42:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 28 01:42:57 localhost kernel: audit: initializing netlink subsys 
(disabled) Nov 28 01:42:57 localhost kernel: audit: type=2000 audit(1764312176.109:1): state=initialized audit_enabled=0 res=1 Nov 28 01:42:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share' Nov 28 01:42:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 28 01:42:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space' Nov 28 01:42:57 localhost kernel: cpuidle: using governor menu Nov 28 01:42:57 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB Nov 28 01:42:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 28 01:42:57 localhost kernel: PCI: Using configuration type 1 for base access Nov 28 01:42:57 localhost kernel: PCI: Using configuration type 1 for extended access Nov 28 01:42:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 28 01:42:57 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB Nov 28 01:42:57 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 28 01:42:57 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 28 01:42:57 localhost kernel: cryptd: max_cpu_qlen set to 1000 Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Module Device) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Processor Device) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 28 01:42:57 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 28 01:42:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 28 01:42:57 localhost kernel: ACPI: Interpreter enabled Nov 28 01:42:57 localhost kernel: ACPI: PM: (supports 
S0 S3 S4 S5) Nov 28 01:42:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing Nov 28 01:42:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 28 01:42:57 localhost kernel: PCI: Using E820 reservations for host bridge windows Nov 28 01:42:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 28 01:42:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 28 01:42:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3] Nov 28 01:42:57 localhost kernel: acpiphp: Slot [3] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [4] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [5] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [6] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [7] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [8] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [9] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [10] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [11] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [12] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [13] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [14] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [15] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [16] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [17] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [18] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [19] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [20] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [21] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [22] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [23] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [24] registered Nov 28 
01:42:57 localhost kernel: acpiphp: Slot [25] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [26] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [27] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [28] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [29] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [30] registered Nov 28 01:42:57 localhost kernel: acpiphp: Slot [31] registered Nov 28 01:42:57 localhost kernel: PCI host bridge to bus 0000:00 Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 28 01:42:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: reg 0x20: [io 0xc140-0xc14f] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.2: reg 0x20: [io 0xc100-0xc11f] Nov 28 01:42:57 
localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 28 01:42:57 localhost kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff] Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 28 01:42:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 28 01:42:57 localhost kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf] Nov 28 01:42:57 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff] Nov 28 01:42:57 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Nov 28 01:42:57 localhost kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Nov 28 01:42:57 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff] Nov 28 01:42:57 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 Nov 28 01:42:57 localhost kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff] Nov 28 01:42:57 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 28 01:42:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] 
type 00 class 0x00ff00 Nov 28 01:42:57 localhost kernel: pci 0000:00:06.0: reg 0x10: [io 0xc120-0xc13f] Nov 28 01:42:57 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 28 01:42:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 28 01:42:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 28 01:42:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 28 01:42:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 28 01:42:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 28 01:42:57 localhost kernel: iommu: Default domain type: Translated Nov 28 01:42:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 28 01:42:57 localhost kernel: SCSI subsystem initialized Nov 28 01:42:57 localhost kernel: ACPI: bus type USB registered Nov 28 01:42:57 localhost kernel: usbcore: registered new interface driver usbfs Nov 28 01:42:57 localhost kernel: usbcore: registered new interface driver hub Nov 28 01:42:57 localhost kernel: usbcore: registered new device driver usb Nov 28 01:42:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered Nov 28 01:42:57 localhost kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 28 01:42:57 localhost kernel: PTP clock support registered Nov 28 01:42:57 localhost kernel: EDAC MC: Ver: 3.0.0 Nov 28 01:42:57 localhost kernel: NetLabel: Initializing Nov 28 01:42:57 localhost kernel: NetLabel: domain hash size = 128 Nov 28 01:42:57 localhost kernel: NetLabel: protocols = UNLABELED CIPSOv4 CALIPSO Nov 28 01:42:57 localhost kernel: NetLabel: unlabeled traffic allowed by default Nov 28 01:42:57 localhost kernel: PCI: Using ACPI for IRQ routing Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 28 01:42:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 28 01:42:57 localhost kernel: vgaarb: loaded Nov 28 01:42:57 localhost kernel: clocksource: Switched to clocksource kvm-clock Nov 28 01:42:57 localhost kernel: VFS: Disk quotas dquot_6.6.0 Nov 28 01:42:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 28 01:42:57 localhost kernel: pnp: PnP ACPI init Nov 28 01:42:57 localhost kernel: pnp: PnP ACPI: found 5 devices Nov 28 01:42:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 28 01:42:57 localhost kernel: NET: Registered PF_INET protocol family Nov 28 01:42:57 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 28 01:42:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Nov 28 01:42:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 28 01:42:57 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 28 01:42:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Nov 28 01:42:57 localhost kernel: TCP: Hash tables 
configured (established 131072 bind 65536) Nov 28 01:42:57 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear) Nov 28 01:42:57 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Nov 28 01:42:57 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Nov 28 01:42:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 28 01:42:57 localhost kernel: NET: Registered PF_XDP protocol family Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window] Nov 28 01:42:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window] Nov 28 01:42:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 28 01:42:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 28 01:42:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 28 01:42:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29825 usecs Nov 28 01:42:57 localhost kernel: PCI: CLS 0 bytes, default 64 Nov 28 01:42:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 28 01:42:57 localhost kernel: Trying to unpack rootfs image as initramfs... 
Nov 28 01:42:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 28 01:42:57 localhost kernel: ACPI: bus type thunderbolt registered
Nov 28 01:42:57 localhost kernel: Initialise system trusted keyrings
Nov 28 01:42:57 localhost kernel: Key type blacklist registered
Nov 28 01:42:57 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Nov 28 01:42:57 localhost kernel: zbud: loaded
Nov 28 01:42:57 localhost kernel: integrity: Platform Keyring initialized
Nov 28 01:42:57 localhost kernel: NET: Registered PF_ALG protocol family
Nov 28 01:42:57 localhost kernel: xor: automatically using best checksumming function avx
Nov 28 01:42:57 localhost kernel: Key type asymmetric registered
Nov 28 01:42:57 localhost kernel: Asymmetric key parser 'x509' registered
Nov 28 01:42:57 localhost kernel: Running certificate verification selftests
Nov 28 01:42:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 28 01:42:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 28 01:42:57 localhost kernel: io scheduler mq-deadline registered
Nov 28 01:42:57 localhost kernel: io scheduler kyber registered
Nov 28 01:42:57 localhost kernel: io scheduler bfq registered
Nov 28 01:42:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 28 01:42:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 28 01:42:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 28 01:42:57 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 28 01:42:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 28 01:42:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 28 01:42:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 28 01:42:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 28 01:42:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 28 01:42:57 localhost kernel: Non-volatile memory driver v1.3
Nov 28 01:42:57 localhost kernel: rdac: device handler registered
Nov 28 01:42:57 localhost kernel: hp_sw: device handler registered
Nov 28 01:42:57 localhost kernel: emc: device handler registered
Nov 28 01:42:57 localhost kernel: alua: device handler registered
Nov 28 01:42:57 localhost kernel: libphy: Fixed MDIO Bus: probed
Nov 28 01:42:57 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Nov 28 01:42:57 localhost kernel: ehci-pci: EHCI PCI platform driver
Nov 28 01:42:57 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Nov 28 01:42:57 localhost kernel: ohci-pci: OHCI PCI platform driver
Nov 28 01:42:57 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Nov 28 01:42:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 28 01:42:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 28 01:42:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 28 01:42:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 28 01:42:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 28 01:42:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 28 01:42:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 28 01:42:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Nov 28 01:42:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 28 01:42:57 localhost kernel: hub 1-0:1.0: USB hub found
Nov 28 01:42:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 28 01:42:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 28 01:42:57 localhost kernel: usbserial: USB Serial support registered for generic
Nov 28 01:42:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 28 01:42:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 28 01:42:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 28 01:42:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 28 01:42:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 28 01:42:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 28 01:42:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 28 01:42:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-28T06:42:56 UTC (1764312176)
Nov 28 01:42:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 28 01:42:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 28 01:42:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 28 01:42:57 localhost kernel: usbcore: registered new interface driver usbhid
Nov 28 01:42:57 localhost kernel: usbhid: USB HID core driver
Nov 28 01:42:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 28 01:42:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 28 01:42:57 localhost kernel: Initializing XFRM netlink socket
Nov 28 01:42:57 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 28 01:42:57 localhost kernel: Segment Routing with IPv6
Nov 28 01:42:57 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 28 01:42:57 localhost kernel: mpls_gso: MPLS GSO support
Nov 28 01:42:57 localhost kernel: IPI shorthand broadcast: enabled
Nov 28 01:42:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 28 01:42:57 localhost kernel: AES CTR mode by8 optimization enabled
Nov 28 01:42:57 localhost kernel: sched_clock: Marking stable (758975655, 180051824)->(1069299812, -130272333)
Nov 28 01:42:57 localhost kernel: registered taskstats version 1
Nov 28 01:42:57 localhost kernel: Loading compiled-in X.509 certificates
Nov 28 01:42:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 28 01:42:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 28 01:42:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 28 01:42:57 localhost kernel: zswap: loaded using pool lzo/zbud
Nov 28 01:42:57 localhost kernel: page_owner is disabled
Nov 28 01:42:57 localhost kernel: Key type big_key registered
Nov 28 01:42:57 localhost kernel: Freeing initrd memory: 74232K
Nov 28 01:42:57 localhost kernel: Key type encrypted registered
Nov 28 01:42:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 28 01:42:57 localhost kernel: Loading compiled-in module X.509 certificates
Nov 28 01:42:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Nov 28 01:42:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 28 01:42:57 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 28 01:42:57 localhost kernel: ima: No architecture policies found
Nov 28 01:42:57 localhost kernel: evm: Initialising EVM extended attributes:
Nov 28 01:42:57 localhost kernel: evm: security.selinux
Nov 28 01:42:57 localhost kernel: evm: security.SMACK64 (disabled)
Nov 28 01:42:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 28 01:42:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 28 01:42:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 28 01:42:57 localhost kernel: evm: security.apparmor (disabled)
Nov 28 01:42:57 localhost kernel: evm: security.ima
Nov 28 01:42:57 localhost kernel: evm: security.capability
Nov 28 01:42:57 localhost kernel: evm: HMAC attrs: 0x1
Nov 28 01:42:57 localhost kernel: Freeing unused decrypted memory: 2036K
Nov 28 01:42:57 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Nov 28 01:42:57 localhost kernel: Write protecting the kernel read-only data: 26624k
Nov 28 01:42:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 28 01:42:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 28 01:42:57 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 28 01:42:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 28 01:42:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Nov 28 01:42:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 28 01:42:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 28 01:42:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 28 01:42:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 28 01:42:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 28 01:42:57 localhost kernel: Run /init as init process
Nov 28 01:42:57 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 01:42:57 localhost systemd[1]: Detected virtualization kvm.
Nov 28 01:42:57 localhost systemd[1]: Detected architecture x86-64.
Nov 28 01:42:57 localhost systemd[1]: Running in initrd.
Nov 28 01:42:57 localhost systemd[1]: No hostname configured, using default hostname.
Nov 28 01:42:57 localhost systemd[1]: Hostname set to .
Nov 28 01:42:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 28 01:42:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 28 01:42:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 01:42:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 28 01:42:57 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 28 01:42:57 localhost systemd[1]: Reached target Local File Systems.
Nov 28 01:42:57 localhost systemd[1]: Reached target Path Units.
Nov 28 01:42:57 localhost systemd[1]: Reached target Slice Units.
Nov 28 01:42:57 localhost systemd[1]: Reached target Swaps.
Nov 28 01:42:57 localhost systemd[1]: Reached target Timer Units.
Nov 28 01:42:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 28 01:42:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 28 01:42:57 localhost systemd[1]: Listening on Journal Socket.
Nov 28 01:42:57 localhost systemd[1]: Listening on udev Control Socket.
Nov 28 01:42:57 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 28 01:42:57 localhost systemd[1]: Reached target Socket Units.
Nov 28 01:42:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 28 01:42:57 localhost systemd[1]: Starting Journal Service...
Nov 28 01:42:57 localhost systemd[1]: Starting Load Kernel Modules...
Nov 28 01:42:57 localhost systemd[1]: Starting Create System Users...
Nov 28 01:42:57 localhost systemd[1]: Starting Setup Virtual Console...
Nov 28 01:42:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 28 01:42:57 localhost systemd[1]: Finished Load Kernel Modules.
Nov 28 01:42:57 localhost systemd-journald[284]: Journal started
Nov 28 01:42:57 localhost systemd-journald[284]: Runtime Journal (/run/log/journal/4c358f0e7e1544e5bde2714780d05a92) is 8.0M, max 314.7M, 306.7M free.
Nov 28 01:42:57 localhost systemd-modules-load[285]: Module 'msr' is built in
Nov 28 01:42:57 localhost systemd[1]: Started Journal Service.
Nov 28 01:42:57 localhost systemd[1]: Finished Setup Virtual Console.
Nov 28 01:42:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 28 01:42:57 localhost systemd[1]: Starting dracut cmdline hook...
Nov 28 01:42:57 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 28 01:42:57 localhost systemd-sysusers[286]: Creating group 'sgx' with GID 997.
Nov 28 01:42:57 localhost systemd-sysusers[286]: Creating group 'users' with GID 100.
Nov 28 01:42:57 localhost systemd-sysusers[286]: Creating group 'dbus' with GID 81.
Nov 28 01:42:57 localhost systemd-sysusers[286]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 28 01:42:57 localhost systemd[1]: Finished Create System Users.
Nov 28 01:42:57 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 28 01:42:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 28 01:42:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 01:42:57 localhost dracut-cmdline[291]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Nov 28 01:42:57 localhost dracut-cmdline[291]: Using kernel command line parameters: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Nov 28 01:42:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 01:42:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 01:42:57 localhost systemd[1]: Finished dracut cmdline hook.
Nov 28 01:42:57 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 28 01:42:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 28 01:42:57 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 28 01:42:57 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Nov 28 01:42:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 28 01:42:57 localhost kernel: RPC: Registered udp transport module.
Nov 28 01:42:57 localhost kernel: RPC: Registered tcp transport module.
Nov 28 01:42:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 28 01:42:57 localhost rpc.statd[410]: Version 2.5.4 starting
Nov 28 01:42:57 localhost rpc.statd[410]: Initializing NSM state
Nov 28 01:42:57 localhost rpc.idmapd[415]: Setting log level to 0
Nov 28 01:42:58 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 28 01:42:58 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 01:42:58 localhost systemd-udevd[428]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 01:42:58 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 01:42:58 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 28 01:42:58 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 28 01:42:58 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 28 01:42:58 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 28 01:42:58 localhost systemd[1]: Reached target System Initialization.
Nov 28 01:42:58 localhost systemd[1]: Reached target Basic System.
Nov 28 01:42:58 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 01:42:58 localhost systemd[1]: Reached target Network.
Nov 28 01:42:58 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 28 01:42:58 localhost systemd[1]: Starting dracut initqueue hook...
Nov 28 01:42:58 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Nov 28 01:42:58 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 28 01:42:58 localhost kernel: GPT:20971519 != 838860799
Nov 28 01:42:58 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 28 01:42:58 localhost kernel: GPT:20971519 != 838860799
Nov 28 01:42:58 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 28 01:42:58 localhost kernel: vda: vda1 vda2 vda3 vda4
Nov 28 01:42:58 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 28 01:42:58 localhost systemd-udevd[460]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 01:42:58 localhost kernel: scsi host0: ata_piix
Nov 28 01:42:58 localhost kernel: scsi host1: ata_piix
Nov 28 01:42:58 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Nov 28 01:42:58 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Nov 28 01:42:58 localhost systemd[1]: Reached target Initrd Root Device.
Nov 28 01:42:58 localhost kernel: ata1: found unknown device (class 0)
Nov 28 01:42:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 28 01:42:58 localhost kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 28 01:42:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 28 01:42:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 28 01:42:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 28 01:42:58 localhost systemd[1]: Finished dracut initqueue hook.
Nov 28 01:42:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 01:42:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 28 01:42:58 localhost systemd[1]: Reached target Remote File Systems.
Nov 28 01:42:58 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 28 01:42:58 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 28 01:42:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Nov 28 01:42:58 localhost systemd-fsck[512]: /usr/sbin/fsck.xfs: XFS file system.
Nov 28 01:42:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Nov 28 01:42:58 localhost systemd[1]: Mounting /sysroot...
Nov 28 01:42:58 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 28 01:42:58 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Nov 28 01:42:58 localhost kernel: XFS (vda4): Ending clean mount
Nov 28 01:42:58 localhost systemd[1]: Mounted /sysroot.
Nov 28 01:42:58 localhost systemd[1]: Reached target Initrd Root File System.
Nov 28 01:42:58 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 28 01:42:58 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 28 01:42:58 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 28 01:42:58 localhost systemd[1]: Reached target Initrd File Systems.
Nov 28 01:42:58 localhost systemd[1]: Reached target Initrd Default Target.
Nov 28 01:42:58 localhost systemd[1]: Starting dracut mount hook...
Nov 28 01:42:58 localhost systemd[1]: Finished dracut mount hook.
Nov 28 01:42:58 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 28 01:42:59 localhost rpc.idmapd[415]: exiting on signal 15
Nov 28 01:42:59 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 28 01:42:59 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 28 01:42:59 localhost systemd[1]: Stopped target Network.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Timer Units.
Nov 28 01:42:59 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 28 01:42:59 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Basic System.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Path Units.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Remote File Systems.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Slice Units.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Socket Units.
Nov 28 01:42:59 localhost systemd[1]: Stopped target System Initialization.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Local File Systems.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Swaps.
Nov 28 01:42:59 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut mount hook.
Nov 28 01:42:59 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 28 01:42:59 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 28 01:42:59 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 28 01:42:59 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 28 01:42:59 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 28 01:42:59 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 28 01:42:59 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 28 01:42:59 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 28 01:42:59 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 28 01:42:59 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 28 01:42:59 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 28 01:42:59 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 28 01:42:59 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Closed udev Control Socket.
Nov 28 01:42:59 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Closed udev Kernel Socket.
Nov 28 01:42:59 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 28 01:42:59 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 28 01:42:59 localhost systemd[1]: Starting Cleanup udev Database...
Nov 28 01:42:59 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 28 01:42:59 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 28 01:42:59 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Create System Users.
Nov 28 01:42:59 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Finished Cleanup udev Database.
Nov 28 01:42:59 localhost systemd[1]: Reached target Switch Root.
Nov 28 01:42:59 localhost systemd[1]: Starting Switch Root...
Nov 28 01:42:59 localhost systemd[1]: Switching root.
Nov 28 01:42:59 localhost systemd-journald[284]: Journal stopped
Nov 28 01:42:59 localhost systemd-journald[284]: Received SIGTERM from PID 1 (systemd).
Nov 28 01:42:59 localhost kernel: audit: type=1404 audit(1764312179.318:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 01:42:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 01:42:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 01:42:59 localhost kernel: audit: type=1403 audit(1764312179.410:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 28 01:42:59 localhost systemd[1]: Successfully loaded SELinux policy in 95.699ms.
Nov 28 01:42:59 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.951ms.
Nov 28 01:42:59 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 28 01:42:59 localhost systemd[1]: Detected virtualization kvm.
Nov 28 01:42:59 localhost systemd[1]: Detected architecture x86-64.
Nov 28 01:42:59 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 01:42:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 01:42:59 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped Switch Root.
Nov 28 01:42:59 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 28 01:42:59 localhost systemd[1]: Created slice Slice /system/getty.
Nov 28 01:42:59 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 28 01:42:59 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 28 01:42:59 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 28 01:42:59 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Nov 28 01:42:59 localhost systemd[1]: Created slice User and Session Slice.
Nov 28 01:42:59 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 28 01:42:59 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 28 01:42:59 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 28 01:42:59 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Switch Root.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 28 01:42:59 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 28 01:42:59 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 28 01:42:59 localhost systemd[1]: Reached target Path Units.
Nov 28 01:42:59 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 28 01:42:59 localhost systemd[1]: Reached target Slice Units.
Nov 28 01:42:59 localhost systemd[1]: Reached target Swaps.
Nov 28 01:42:59 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 28 01:42:59 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 28 01:42:59 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 28 01:42:59 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 28 01:42:59 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 28 01:42:59 localhost systemd[1]: Listening on udev Control Socket.
Nov 28 01:42:59 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 28 01:42:59 localhost systemd[1]: Mounting Huge Pages File System...
Nov 28 01:42:59 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 28 01:42:59 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 28 01:42:59 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 28 01:42:59 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 01:42:59 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 28 01:42:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 28 01:42:59 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 28 01:42:59 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 28 01:42:59 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 28 01:42:59 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 28 01:42:59 localhost systemd[1]: Stopped Journal Service.
Nov 28 01:42:59 localhost systemd[1]: Starting Journal Service...
Nov 28 01:42:59 localhost systemd[1]: Starting Load Kernel Modules...
Nov 28 01:42:59 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 28 01:42:59 localhost kernel: fuse: init (API version 7.36)
Nov 28 01:42:59 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 28 01:42:59 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 28 01:42:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 28 01:42:59 localhost systemd-journald[618]: Journal started
Nov 28 01:42:59 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/5cd59ba25ae47acac865224fa46a5f9e) is 8.0M, max 314.7M, 306.7M free.
Nov 28 01:42:59 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 28 01:42:59 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd-modules-load[619]: Module 'msr' is built in
Nov 28 01:42:59 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 28 01:42:59 localhost systemd[1]: Started Journal Service.
Nov 28 01:42:59 localhost systemd[1]: Mounted Huge Pages File System.
Nov 28 01:42:59 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 28 01:42:59 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 28 01:42:59 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 28 01:42:59 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 28 01:42:59 localhost kernel: ACPI: bus type drm_connector registered
Nov 28 01:42:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 01:42:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 28 01:43:00 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 28 01:43:00 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 28 01:43:00 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 28 01:43:00 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 28 01:43:00 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 28 01:43:00 localhost systemd[1]: Finished Load Kernel Modules.
Nov 28 01:43:00 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 28 01:43:00 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 28 01:43:00 localhost systemd[1]: Mounting FUSE Control File System...
Nov 28 01:43:00 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 28 01:43:00 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 01:43:00 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 28 01:43:00 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 28 01:43:00 localhost systemd[1]: Starting Load/Save Random Seed...
Nov 28 01:43:00 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 28 01:43:00 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/5cd59ba25ae47acac865224fa46a5f9e) is 8.0M, max 314.7M, 306.7M free.
Nov 28 01:43:00 localhost systemd-journald[618]: Received client request to flush runtime journal.
Nov 28 01:43:00 localhost systemd[1]: Starting Create System Users...
Nov 28 01:43:00 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 28 01:43:00 localhost systemd[1]: Mounted FUSE Control File System.
Nov 28 01:43:00 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 28 01:43:00 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 28 01:43:00 localhost systemd[1]: Finished Load/Save Random Seed.
Nov 28 01:43:00 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 28 01:43:00 localhost systemd-sysusers[630]: Creating group 'sgx' with GID 989.
Nov 28 01:43:00 localhost systemd-sysusers[630]: Creating group 'systemd-oom' with GID 988.
Nov 28 01:43:00 localhost systemd-sysusers[630]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Nov 28 01:43:00 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 28 01:43:00 localhost systemd[1]: Finished Create System Users.
Nov 28 01:43:00 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 28 01:43:00 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 28 01:43:00 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 28 01:43:00 localhost systemd[1]: Set up automount EFI System Partition Automount.
Nov 28 01:43:00 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 28 01:43:00 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 28 01:43:00 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Nov 28 01:43:00 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 28 01:43:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 28 01:43:00 localhost systemd-udevd[637]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 01:43:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 28 01:43:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 28 01:43:00 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 28 01:43:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Nov 28 01:43:00 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Nov 28 01:43:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Nov 28 01:43:00 localhost systemd[1]: Mounting /boot...
Nov 28 01:43:00 localhost systemd-fsck[679]: fsck.fat 4.2 (2021-01-31)
Nov 28 01:43:00 localhost systemd-fsck[679]: /dev/vda2: 12 files, 1782/51145 clusters
Nov 28 01:43:00 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Nov 28 01:43:00 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Nov 28 01:43:00 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 28 01:43:00 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 28 01:43:00 localhost kernel: XFS (vda3): Ending clean mount
Nov 28 01:43:00 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Nov 28 01:43:00 localhost systemd[1]: Mounted /boot.
Nov 28 01:43:00 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 28 01:43:00 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 28 01:43:00 localhost kernel: Console: switching to colour dummy device 80x25
Nov 28 01:43:00 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 28 01:43:00 localhost kernel: [drm] features: -context_init
Nov 28 01:43:00 localhost kernel: [drm] number of scanouts: 1
Nov 28 01:43:00 localhost kernel: [drm] number of cap sets: 0
Nov 28 01:43:00 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Nov 28 01:43:00 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Nov 28 01:43:00 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 28 01:43:00 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 28 01:43:00 localhost kernel: SVM: TSC scaling supported
Nov 28 01:43:00 localhost kernel: kvm: Nested Virtualization enabled
Nov 28 01:43:00 localhost kernel: SVM: kvm: Nested Paging enabled
Nov 28 01:43:00 localhost kernel: SVM: LBR virtualization supported
Nov 28 01:43:00 localhost systemd[1]: Mounting /boot/efi...
Nov 28 01:43:01 localhost systemd[1]: Mounted /boot/efi.
Nov 28 01:43:01 localhost systemd[1]: Reached target Local File Systems.
Nov 28 01:43:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 28 01:43:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 28 01:43:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 28 01:43:01 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 01:43:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 28 01:43:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 28 01:43:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 28 01:43:01 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 702 (bootctl)
Nov 28 01:43:01 localhost systemd[1]: Starting File System Check on /dev/vda2...
Nov 28 01:43:01 localhost systemd[1]: Finished File System Check on /dev/vda2.
Nov 28 01:43:01 localhost systemd[1]: Mounting EFI System Partition Automount...
Nov 28 01:43:01 localhost systemd[1]: Mounted EFI System Partition Automount.
Nov 28 01:43:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 28 01:43:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 28 01:43:01 localhost systemd[1]: Starting Security Auditing Service...
Nov 28 01:43:01 localhost systemd[1]: Starting RPC Bind...
Nov 28 01:43:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 28 01:43:01 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 28 01:43:01 localhost auditd[719]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Nov 28 01:43:01 localhost auditd[719]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Nov 28 01:43:01 localhost systemd[1]: Started RPC Bind.
Nov 28 01:43:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 28 01:43:01 localhost systemd[1]: Starting Update is Completed...
Nov 28 01:43:01 localhost systemd[1]: Finished Update is Completed.
Nov 28 01:43:01 localhost augenrules[725]: /sbin/augenrules: No change
Nov 28 01:43:01 localhost augenrules[739]: No rules
Nov 28 01:43:01 localhost augenrules[739]: enabled 1
Nov 28 01:43:01 localhost augenrules[739]: failure 1
Nov 28 01:43:01 localhost augenrules[739]: pid 719
Nov 28 01:43:01 localhost augenrules[739]: rate_limit 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_limit 8192
Nov 28 01:43:01 localhost augenrules[739]: lost 0
Nov 28 01:43:01 localhost augenrules[739]: backlog 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time 60000
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time_actual 0
Nov 28 01:43:01 localhost augenrules[739]: enabled 1
Nov 28 01:43:01 localhost augenrules[739]: failure 1
Nov 28 01:43:01 localhost augenrules[739]: pid 719
Nov 28 01:43:01 localhost augenrules[739]: rate_limit 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_limit 8192
Nov 28 01:43:01 localhost augenrules[739]: lost 0
Nov 28 01:43:01 localhost augenrules[739]: backlog 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time 60000
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time_actual 0
Nov 28 01:43:01 localhost augenrules[739]: enabled 1
Nov 28 01:43:01 localhost augenrules[739]: failure 1
Nov 28 01:43:01 localhost augenrules[739]: pid 719
Nov 28 01:43:01 localhost augenrules[739]: rate_limit 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_limit 8192
Nov 28 01:43:01 localhost augenrules[739]: lost 0
Nov 28 01:43:01 localhost augenrules[739]: backlog 0
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time 60000
Nov 28 01:43:01 localhost augenrules[739]: backlog_wait_time_actual 0
Nov 28 01:43:01 localhost systemd[1]: Started Security Auditing Service.
Nov 28 01:43:01 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 28 01:43:01 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 28 01:43:01 localhost systemd[1]: Reached target System Initialization.
Nov 28 01:43:01 localhost systemd[1]: Started dnf makecache --timer.
Nov 28 01:43:01 localhost systemd[1]: Started Daily rotation of log files.
Nov 28 01:43:01 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 28 01:43:01 localhost systemd[1]: Reached target Timer Units.
Nov 28 01:43:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 28 01:43:01 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 28 01:43:01 localhost systemd[1]: Reached target Socket Units.
Nov 28 01:43:01 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Nov 28 01:43:01 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 28 01:43:01 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 01:43:01 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 28 01:43:01 localhost systemd[1]: Reached target Basic System.
Nov 28 01:43:01 localhost systemd[1]: Starting NTP client/server...
Nov 28 01:43:01 localhost journal[750]: Ready
Nov 28 01:43:01 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 28 01:43:01 localhost systemd[1]: Started irqbalance daemon.
Nov 28 01:43:01 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 28 01:43:01 localhost systemd[1]: Starting System Logging Service...
Nov 28 01:43:01 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 01:43:01 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 01:43:01 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 01:43:01 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 28 01:43:01 localhost rsyslogd[758]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="758" x-info="https://www.rsyslog.com"] start
Nov 28 01:43:01 localhost rsyslogd[758]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Nov 28 01:43:01 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 28 01:43:01 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 28 01:43:01 localhost systemd[1]: Starting User Login Management...
Nov 28 01:43:01 localhost systemd[1]: Started System Logging Service.
Nov 28 01:43:01 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 28 01:43:01 localhost chronyd[765]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 01:43:01 localhost chronyd[765]: Using right/UTC timezone to obtain leap second data
Nov 28 01:43:01 localhost chronyd[765]: Loaded seccomp filter (level 2)
Nov 28 01:43:01 localhost systemd[1]: Started NTP client/server.
Nov 28 01:43:01 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 01:43:01 localhost systemd-logind[763]: New seat seat0.
Nov 28 01:43:01 localhost systemd-logind[763]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 28 01:43:01 localhost systemd-logind[763]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 28 01:43:01 localhost systemd[1]: Started User Login Management.
Nov 28 01:43:01 localhost cloud-init[769]: Cloud-init v. 22.1-9.el9 running 'init-local' at Fri, 28 Nov 2025 06:43:01 +0000. Up 5.72 seconds.
Nov 28 01:43:01 localhost systemd[1]: Starting Hostname Service...
Nov 28 01:43:01 localhost systemd[1]: Started Hostname Service.
Nov 28 01:43:01 localhost systemd-hostnamed[783]: Hostname set to (static)
Nov 28 01:43:01 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpuu9n5t76.mount: Deactivated successfully.
Nov 28 01:43:01 localhost systemd[1]: Finished Initial cloud-init job (pre-networking).
Nov 28 01:43:01 localhost systemd[1]: Reached target Preparation for Network.
Nov 28 01:43:01 localhost systemd[1]: Starting Network Manager...
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0092] NetworkManager (version 1.42.2-1.el9) is starting... (boot:a439187c-a774-4883-a00c-1a7b4e2aa22a)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0098] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Nov 28 01:43:02 localhost systemd[1]: Started Network Manager.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0122] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 28 01:43:02 localhost systemd[1]: Reached target Network.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0187] manager[0x563361390020]: monitoring kernel firmware directory '/lib/firmware'.
Nov 28 01:43:02 localhost systemd[1]: Starting Network Manager Wait Online...
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0233] hostname: hostname: using hostnamed
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0233] hostname: static hostname changed from (none) to "np0005538515.novalocal"
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0237] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 28 01:43:02 localhost systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 28 01:43:02 localhost systemd[1]: Starting Enable periodic update of entitlement certificates....
Nov 28 01:43:02 localhost systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0371] manager[0x563361390020]: rfkill: Wi-Fi hardware radio set enabled
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0371] manager[0x563361390020]: rfkill: WWAN hardware radio set enabled
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0404] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0406] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0408] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0409] manager: Networking is enabled by state file
Nov 28 01:43:02 localhost systemd[1]: Started Enable periodic update of entitlement certificates..
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0420] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0421] settings: Loaded settings plugin: keyfile (internal)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0440] dhcp: init: Using DHCP client 'internal'
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0442] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0453] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Nov 28 01:43:02 localhost systemd[1]: Started GSSAPI Proxy Daemon.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0456] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0462] device (lo): Activation: starting connection 'lo' (116e0581-bf2b-4791-a901-61d85cb9c212)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0468] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0470] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0502] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0504] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0505] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0507] device (eth0): carrier: link connected
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0509] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0513] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0517] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0521] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0522] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0525] manager: NetworkManager state is now CONNECTING
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0526] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0536] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0538] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 28 01:43:02 localhost systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 28 01:43:02 localhost systemd[1]: Reached target NFS client services.
Nov 28 01:43:02 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0617] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Nov 28 01:43:02 localhost systemd[1]: Reached target Remote File Systems.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0623] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 28 01:43:02 localhost systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0653] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 01:43:02 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0870] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0873] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0881] device (lo): Activation: successful, device activated.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0890] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0893] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0903] manager: NetworkManager state is now CONNECTED_SITE
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0908] device (eth0): Activation: successful, device activated.
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0916] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 28 01:43:02 localhost NetworkManager[788]: [1764312182.0922] manager: startup complete
Nov 28 01:43:02 localhost systemd[1]: Finished Network Manager Wait Online.
Nov 28 01:43:02 localhost systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Nov 28 01:43:02 localhost cloud-init[995]: Cloud-init v. 22.1-9.el9 running 'init' at Fri, 28 Nov 2025 06:43:02 +0000. Up 6.54 seconds.
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |  eth0  | True |         38.102.83.53         | 255.255.255.0 | global | fa:16:3e:93:ca:2d |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |  eth0  | True | fe80::f816:3eff:fe93:ca2d/64 |       .       |  link  | fa:16:3e:93:ca:2d |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 28 01:43:02 localhost cloud-init[995]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 28 01:43:02 localhost polkitd[1033]: Started polkitd version 0.117
Nov 28 01:43:02 localhost systemd[1]: Starting Authorization Manager...
Nov 28 01:43:02 localhost systemd[1]: Started Dynamic System Tuning Daemon.
Nov 28 01:43:02 localhost systemd[1]: Started Authorization Manager.
Nov 28 01:43:04 localhost cloud-init[995]: Generating public/private rsa key pair.
Nov 28 01:43:04 localhost cloud-init[995]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 28 01:43:04 localhost cloud-init[995]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 28 01:43:04 localhost cloud-init[995]: The key fingerprint is:
Nov 28 01:43:04 localhost cloud-init[995]: SHA256:d9rXMPbh9bnKjfp2LxEfqsw+3ON+ysSYxfJ3/uvflfs root@np0005538515.novalocal
Nov 28 01:43:04 localhost cloud-init[995]: The key's randomart image is:
Nov 28 01:43:04 localhost cloud-init[995]: +---[RSA 3072]----+
Nov 28 01:43:04 localhost cloud-init[995]: |                 |
Nov 28 01:43:04 localhost cloud-init[995]: |                 |
Nov 28 01:43:04 localhost cloud-init[995]: |                 |
Nov 28 01:43:04 localhost cloud-init[995]: |  .  ..          |
Nov 28 01:43:04 localhost cloud-init[995]: |   S ...o=+o|
Nov 28 01:43:04 localhost cloud-init[995]: |      . +Boo=B|
Nov 28 01:43:04 localhost cloud-init[995]: |       =o++.**|
Nov 28 01:43:04 localhost cloud-init[995]: |        *+==+*|
Nov 28 01:43:04 localhost cloud-init[995]: |       .oBXBBE|
Nov 28 01:43:04 localhost cloud-init[995]: +----[SHA256]-----+
Nov 28 01:43:04 localhost cloud-init[995]: Generating public/private ecdsa key pair.
Nov 28 01:43:04 localhost cloud-init[995]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 28 01:43:04 localhost cloud-init[995]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 28 01:43:04 localhost cloud-init[995]: The key fingerprint is:
Nov 28 01:43:04 localhost cloud-init[995]: SHA256:U4ZOXeHErZ4n6bWc0xDI5MESnYKy7XJodqSqxRRSX5w root@np0005538515.novalocal
Nov 28 01:43:04 localhost cloud-init[995]: The key's randomart image is:
Nov 28 01:43:04 localhost cloud-init[995]: +---[ECDSA 256]---+
Nov 28 01:43:04 localhost cloud-init[995]: | . ..o.=++ |
Nov 28 01:43:04 localhost cloud-init[995]: | . ...Eoo+B . |
Nov 28 01:43:04 localhost cloud-init[995]: | . . .+o +*.+ |
Nov 28 01:43:04 localhost cloud-init[995]: | . ..ooo = . |
Nov 28 01:43:04 localhost cloud-init[995]: | . =S . o . |
Nov 28 01:43:04 localhost cloud-init[995]: | o * +. = + |
Nov 28 01:43:04 localhost cloud-init[995]: | o+ + . = = |
Nov 28 01:43:04 localhost cloud-init[995]: | .. . = .|
Nov 28 01:43:04 localhost cloud-init[995]: | .. . |
Nov 28 01:43:04 localhost cloud-init[995]: +----[SHA256]-----+
Nov 28 01:43:04 localhost cloud-init[995]: Generating public/private ed25519 key pair.
Nov 28 01:43:04 localhost cloud-init[995]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 28 01:43:04 localhost cloud-init[995]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 28 01:43:04 localhost cloud-init[995]: The key fingerprint is:
Nov 28 01:43:04 localhost cloud-init[995]: SHA256:REHx2gIqRu/twAcPou5ct8uLzF9aMkAibgMpUuh9OgI root@np0005538515.novalocal
Nov 28 01:43:04 localhost cloud-init[995]: The key's randomart image is:
Nov 28 01:43:04 localhost cloud-init[995]: +--[ED25519 256]--+
Nov 28 01:43:04 localhost cloud-init[995]: | .. .=o |
Nov 28 01:43:04 localhost cloud-init[995]: |.o . . |
Nov 28 01:43:04 localhost cloud-init[995]: |B.o. . . . |
Nov 28 01:43:04 localhost cloud-init[995]: |Eooo o o o |
Nov 28 01:43:04 localhost cloud-init[995]: |.++.B S . |
Nov 28 01:43:04 localhost cloud-init[995]: |.+.B.= . |
Nov 28 01:43:04 localhost cloud-init[995]: |. ..=++o |
Nov 28 01:43:04 localhost cloud-init[995]: |o + ++B |
Nov 28 01:43:04 localhost cloud-init[995]: |.+ +.B+ |
Nov 28 01:43:04 localhost cloud-init[995]: +----[SHA256]-----+
Nov 28 01:43:04 localhost sm-notify[1127]: Version 2.5.4 starting
Nov 28 01:43:04 localhost systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Nov 28 01:43:04 localhost systemd[1]: Reached target Cloud-config availability.
Nov 28 01:43:04 localhost systemd[1]: Reached target Network is Online.
Nov 28 01:43:04 localhost systemd[1]: Starting Apply the settings specified in cloud-config...
Nov 28 01:43:04 localhost sshd[1128]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:04 localhost systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Nov 28 01:43:04 localhost systemd[1]: Starting Crash recovery kernel arming...
Nov 28 01:43:04 localhost systemd[1]: Starting Notify NFS peers of a restart...
Nov 28 01:43:04 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 28 01:43:04 localhost systemd[1]: Starting Permit User Sessions...
Nov 28 01:43:04 localhost systemd[1]: Started Notify NFS peers of a restart.
Nov 28 01:43:04 localhost systemd[1]: Started OpenSSH server daemon.
Nov 28 01:43:04 localhost systemd[1]: Finished Permit User Sessions.
Nov 28 01:43:04 localhost systemd[1]: Started Command Scheduler.
Nov 28 01:43:04 localhost systemd[1]: Started Getty on tty1.
Nov 28 01:43:04 localhost systemd[1]: Started Serial Getty on ttyS0.
Nov 28 01:43:04 localhost systemd[1]: Reached target Login Prompts.
Nov 28 01:43:04 localhost systemd[1]: Reached target Multi-User System.
Nov 28 01:43:04 localhost systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 28 01:43:04 localhost systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 28 01:43:04 localhost systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 28 01:43:04 localhost kdumpctl[1131]: kdump: No kdump initial ramdisk found.
Nov 28 01:43:04 localhost kdumpctl[1131]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Nov 28 01:43:04 localhost cloud-init[1247]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Fri, 28 Nov 2025 06:43:04 +0000. Up 9.00 seconds.
Nov 28 01:43:04 localhost systemd[1]: Finished Apply the settings specified in cloud-config.
Nov 28 01:43:04 localhost systemd[1]: Starting Execute cloud user/final scripts...
Nov 28 01:43:05 localhost sshd[1327]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1348]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1365]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1376]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1390]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1399]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1421]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost dracut[1426]: dracut-057-21.git20230214.el9
Nov 28 01:43:05 localhost sshd[1428]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost sshd[1444]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:43:05 localhost cloud-init[1448]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Fri, 28 Nov 2025 06:43:05 +0000. Up 9.37 seconds.
Nov 28 01:43:05 localhost cloud-init[1451]: #############################################################
Nov 28 01:43:05 localhost cloud-init[1453]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 28 01:43:05 localhost cloud-init[1457]: 256 SHA256:U4ZOXeHErZ4n6bWc0xDI5MESnYKy7XJodqSqxRRSX5w root@np0005538515.novalocal (ECDSA)
Nov 28 01:43:05 localhost dracut[1430]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Nov 28 01:43:05 localhost cloud-init[1465]: 256 SHA256:REHx2gIqRu/twAcPou5ct8uLzF9aMkAibgMpUuh9OgI root@np0005538515.novalocal (ED25519)
Nov 28 01:43:05 localhost cloud-init[1473]: 3072 SHA256:d9rXMPbh9bnKjfp2LxEfqsw+3ON+ysSYxfJ3/uvflfs root@np0005538515.novalocal (RSA)
Nov 28 01:43:05 localhost cloud-init[1475]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 28 01:43:05 localhost cloud-init[1477]: #############################################################
Nov 28 01:43:05 localhost cloud-init[1448]: Cloud-init v. 22.1-9.el9 finished at Fri, 28 Nov 2025 06:43:05 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0]. Up 9.61 seconds
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Nov 28 01:43:05 localhost systemd[1]: Reloading Network Manager... Nov 28 01:43:05 localhost dracut[1430]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Nov 28 01:43:05 localhost NetworkManager[788]: [1764312185.5332] audit: op="reload" arg="0" pid=1595 uid=0 result="success" Nov 28 01:43:05 localhost NetworkManager[788]: [1764312185.5343] config: signal: SIGHUP (no changes from disk) Nov 28 01:43:05 localhost dracut[1430]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Nov 28 01:43:05 localhost systemd[1]: Reloaded Network Manager. Nov 28 01:43:05 localhost dracut[1430]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! Nov 28 01:43:05 localhost systemd[1]: Finished Execute cloud user/final scripts. Nov 28 01:43:05 localhost systemd[1]: Reached target Cloud-init target. Nov 28 01:43:05 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! 
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Nov 28 01:43:05 localhost dracut[1430]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Nov 28 01:43:05 localhost dracut[1430]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! 
Nov 28 01:43:05 localhost dracut[1430]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! Nov 28 01:43:06 localhost dracut[1430]: memstrack is not available Nov 28 01:43:06 localhost dracut[1430]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Nov 28 01:43:06 localhost dracut[1430]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmand' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found! 
Nov 28 01:43:06 localhost dracut[1430]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found! Nov 28 01:43:06 localhost dracut[1430]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found! Nov 28 01:43:06 localhost dracut[1430]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found! 
Nov 28 01:43:06 localhost dracut[1430]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found! Nov 28 01:43:06 localhost dracut[1430]: memstrack is not available Nov 28 01:43:06 localhost dracut[1430]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng Nov 28 01:43:06 localhost dracut[1430]: *** Including module: systemd *** Nov 28 01:43:06 localhost dracut[1430]: *** Including module: systemd-initrd *** Nov 28 01:43:06 localhost dracut[1430]: *** Including module: i18n *** Nov 28 01:43:06 localhost dracut[1430]: No KEYMAP configured. Nov 28 01:43:06 localhost dracut[1430]: *** Including module: drm *** Nov 28 01:43:07 localhost chronyd[765]: Selected source 206.108.0.131 (2.rhel.pool.ntp.org) Nov 28 01:43:07 localhost chronyd[765]: System clock TAI offset set to 37 seconds Nov 28 01:43:07 localhost dracut[1430]: *** Including module: prefixdevname *** Nov 28 01:43:07 localhost dracut[1430]: *** Including module: kernel-modules *** Nov 28 01:43:07 localhost dracut[1430]: *** Including module: kernel-modules-extra *** Nov 28 01:43:07 localhost dracut[1430]: *** Including module: qemu *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: fstab-sys *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: rootfs-block *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: terminfo *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: udev-rules *** Nov 28 01:43:08 localhost dracut[1430]: Skipping udev rule: 91-permissions.rules Nov 28 01:43:08 localhost dracut[1430]: Skipping udev rule: 80-drivers-modprobe.rules Nov 28 01:43:08 localhost dracut[1430]: *** Including module: virtiofs *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: dracut-systemd *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: usrmount *** Nov 28 01:43:08 localhost dracut[1430]: *** Including module: base *** Nov 28 01:43:09 localhost dracut[1430]: *** Including 
module: fs-lib *** Nov 28 01:43:09 localhost dracut[1430]: *** Including module: kdumpbase *** Nov 28 01:43:09 localhost dracut[1430]: *** Including module: microcode_ctl-fw_dir_override *** Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl module: mangling fw_dir Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-2d-07" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-4e-03" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-4f-01" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-55-04" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-5e-03" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"... 
Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-8c-01" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: processing data directory "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"... Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored Nov 28 01:43:09 localhost dracut[1430]: microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware" Nov 28 01:43:09 localhost dracut[1430]: *** Including module: shutdown *** Nov 28 01:43:09 localhost dracut[1430]: *** Including module: squash *** Nov 28 01:43:10 localhost dracut[1430]: *** Including modules done *** Nov 28 01:43:10 localhost dracut[1430]: *** Installing kernel module dependencies *** Nov 28 01:43:10 localhost dracut[1430]: *** Installing kernel module dependencies done *** Nov 28 01:43:10 localhost dracut[1430]: *** Resolving executable dependencies *** Nov 28 01:43:11 localhost dracut[1430]: *** Resolving executable dependencies done *** Nov 28 01:43:11 localhost dracut[1430]: *** Hardlinking files *** Nov 28 01:43:11 localhost dracut[1430]: Mode: real Nov 28 01:43:11 localhost dracut[1430]: Files: 1099 Nov 28 01:43:11 localhost dracut[1430]: Linked: 3 files Nov 28 01:43:11 localhost dracut[1430]: Compared: 0 xattrs Nov 28 01:43:11 localhost dracut[1430]: Compared: 373 files Nov 28 01:43:12 localhost dracut[1430]: Saved: 61.04 KiB Nov 28 01:43:12 localhost dracut[1430]: Duration: 0.018742 seconds Nov 28 01:43:12 localhost dracut[1430]: *** Hardlinking files done *** Nov 28 01:43:12 localhost dracut[1430]: Could not find 'strip'. 
Not stripping the initramfs. Nov 28 01:43:12 localhost dracut[1430]: *** Generating early-microcode cpio image *** Nov 28 01:43:12 localhost dracut[1430]: *** Constructing AuthenticAMD.bin *** Nov 28 01:43:12 localhost dracut[1430]: *** Store current command line parameters *** Nov 28 01:43:12 localhost dracut[1430]: Stored kernel commandline: Nov 28 01:43:12 localhost dracut[1430]: No dracut internal kernel commandline stored in the initramfs Nov 28 01:43:12 localhost dracut[1430]: *** Install squash loader *** Nov 28 01:43:12 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 28 01:43:12 localhost dracut[1430]: *** Squashing the files inside the initramfs *** Nov 28 01:43:13 localhost dracut[1430]: *** Squashing the files inside the initramfs done *** Nov 28 01:43:13 localhost dracut[1430]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' *** Nov 28 01:43:14 localhost dracut[1430]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done *** Nov 28 01:43:14 localhost kdumpctl[1131]: kdump: kexec: loaded kdump kernel Nov 28 01:43:14 localhost kdumpctl[1131]: kdump: Starting kdump: [OK] Nov 28 01:43:14 localhost systemd[1]: Finished Crash recovery kernel arming. Nov 28 01:43:14 localhost systemd[1]: Startup finished in 1.525s (kernel) + 2.012s (initrd) + 15.244s (userspace) = 18.782s. Nov 28 01:43:32 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 28 01:43:40 localhost sshd[4170]: main: sshd: ssh-rsa algorithm is disabled Nov 28 01:43:40 localhost systemd[1]: Created slice User Slice of UID 1000. Nov 28 01:43:40 localhost systemd[1]: Starting User Runtime Directory /run/user/1000... Nov 28 01:43:40 localhost systemd-logind[763]: New session 1 of user zuul. Nov 28 01:43:41 localhost systemd[1]: Finished User Runtime Directory /run/user/1000. Nov 28 01:43:41 localhost systemd[1]: Starting User Manager for UID 1000... 
Nov 28 01:43:41 localhost systemd[4174]: Queued start job for default target Main User Target. Nov 28 01:43:41 localhost systemd[4174]: Created slice User Application Slice. Nov 28 01:43:41 localhost systemd[4174]: Started Mark boot as successful after the user session has run 2 minutes. Nov 28 01:43:41 localhost systemd[4174]: Started Daily Cleanup of User's Temporary Directories. Nov 28 01:43:41 localhost systemd[4174]: Reached target Paths. Nov 28 01:43:41 localhost systemd[4174]: Reached target Timers. Nov 28 01:43:41 localhost systemd[4174]: Starting D-Bus User Message Bus Socket... Nov 28 01:43:41 localhost systemd[4174]: Starting Create User's Volatile Files and Directories... Nov 28 01:43:41 localhost systemd[4174]: Finished Create User's Volatile Files and Directories. Nov 28 01:43:41 localhost systemd[4174]: Listening on D-Bus User Message Bus Socket. Nov 28 01:43:41 localhost systemd[4174]: Reached target Sockets. Nov 28 01:43:41 localhost systemd[4174]: Reached target Basic System. Nov 28 01:43:41 localhost systemd[4174]: Reached target Main User Target. Nov 28 01:43:41 localhost systemd[4174]: Startup finished in 113ms. Nov 28 01:43:41 localhost systemd[1]: Started User Manager for UID 1000. Nov 28 01:43:41 localhost systemd[1]: Started Session 1 of User zuul. 
Nov 28 01:43:41 localhost python3[4227]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 01:43:51 localhost python3[4245]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 01:43:56 localhost python3[4298]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 01:43:58 localhost python3[4328]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present Nov 28 01:44:01 localhost python3[4344]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:01 localhost python3[4358]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:03 localhost python3[4417]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True 
get_attributes=True Nov 28 01:44:03 localhost python3[4458]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764312242.7710705-395-6155547921769/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=4237e6bc560e46d9aa55417d911b2c55_id_rsa follow=False checksum=47f6a2f8fa426c1f34aad346f88073a22928af4e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:04 localhost python3[4531]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:05 localhost python3[4572]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764312244.4699037-496-47380771635991/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=4237e6bc560e46d9aa55417d911b2c55_id_rsa.pub follow=False checksum=d1f12d852c72cfefab089d88337552962cfbc93d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:06 localhost python3[4600]: ansible-ping Invoked with data=pong Nov 28 01:44:09 localhost python3[4614]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 01:44:13 localhost python3[4667]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None Nov 28 01:44:13 localhost chronyd[765]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org) Nov 28 01:44:15 localhost python3[4689]: ansible-file Invoked with 
path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:16 localhost python3[4703]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:16 localhost python3[4717]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:17 localhost python3[4731]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:17 localhost python3[4745]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:18 localhost python3[4759]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:20 localhost python3[4775]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:22 localhost python3[4824]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:22 localhost python3[4867]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764312262.2043922-106-134507180075167/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:31 localhost python3[4895]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa 
AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:31 localhost python3[4909]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:31 localhost python3[4923]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:31 localhost python3[4937]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None 
key_options=None comment=None Nov 28 01:44:32 localhost python3[4951]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:32 localhost python3[4965]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:32 localhost python3[4979]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:32 localhost python3[4993]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:33 localhost python3[5007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:33 localhost python3[5021]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:33 localhost python3[5035]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:33 localhost python3[5049]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:34 
localhost python3[5063]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:34 localhost python3[5077]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:34 localhost python3[5091]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:34 localhost python3[5105]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:35 localhost python3[5119]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:35 localhost python3[5133]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:35 localhost python3[5147]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:35 localhost python3[5161]: ansible-authorized_key Invoked 
with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:36 localhost python3[5175]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:36 localhost python3[5189]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:36 localhost python3[5203]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:36 localhost python3[5217]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 
vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:37 localhost python3[5231]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:37 localhost python3[5245]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None Nov 28 01:44:38 localhost python3[5261]: ansible-community.general.timezone Invoked with name=UTC hwclock=None Nov 28 01:44:38 localhost systemd[1]: Starting Time & Date Service... Nov 28 01:44:39 localhost systemd[1]: Started Time & Date Service. Nov 28 01:44:39 localhost systemd-timedated[5263]: Changed time zone to 'UTC' (UTC). 
Nov 28 01:44:39 localhost python3[5282]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:40 localhost python3[5328]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:41 localhost python3[5369]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764312280.7429726-497-121646648876435/source _original_basename=tmplqmmkecs follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:42 localhost python3[5429]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:42 localhost python3[5470]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764312282.2725708-588-225791012218072/source _original_basename=tmp_40x8ufp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:44 localhost python3[5532]: ansible-ansible.legacy.stat Invoked 
with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:45 localhost python3[5575]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764312284.38717-732-123433027138430/source _original_basename=tmpteo97krb follow=False checksum=1cc2ea2b76967ada2d4710a35e138c3751da2100 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:46 localhost python3[5603]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 01:44:46 localhost python3[5619]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 01:44:47 localhost python3[5669]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:44:47 localhost python3[5712]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764312287.4328728-857-10663037658759/source _original_basename=tmpo3n9mue_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:44:59 localhost python3[5743]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-161e-20ee-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 01:45:09 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 28 01:45:10 localhost python3[5765]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-161e-20ee-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None Nov 28 01:45:12 localhost python3[5783]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:45:31 localhost python3[5799]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:45:53 localhost systemd[4174]: Starting Mark boot as successful... Nov 28 01:45:53 localhost systemd[4174]: Finished Mark boot as successful. 
Nov 28 01:46:31 localhost systemd-logind[763]: Session 1 logged out. Waiting for processes to exit. Nov 28 01:46:47 localhost sshd[5803]: main: sshd: ssh-rsa algorithm is disabled Nov 28 01:47:02 localhost systemd[1]: Unmounting EFI System Partition Automount... Nov 28 01:47:02 localhost systemd[1]: efi.mount: Deactivated successfully. Nov 28 01:47:02 localhost systemd[1]: Unmounted EFI System Partition Automount. Nov 28 01:48:53 localhost systemd[4174]: Created slice User Background Tasks Slice. Nov 28 01:48:53 localhost systemd[4174]: Starting Cleanup of User's Temporary Files and Directories... Nov 28 01:48:53 localhost systemd[4174]: Finished Cleanup of User's Temporary Files and Directories. Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: reg 0x10: [io 0x0000-0x003f] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff] Nov 28 01:49:48 localhost kernel: pci 0000:00:07.0: BAR 0: assigned [io 0x1000-0x103f] Nov 28 01:49:48 localhost kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003) Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2147] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Nov 28 01:49:48 localhost systemd-udevd[5809]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2288] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external') Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2309] settings: (eth1): created default wired connection 'Wired connection 1' Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2312] device (eth1): carrier: link connected Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2314] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed') Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2318] policy: auto-activating connection 'Wired connection 1' (3f6b1ecd-3b33-3888-bbb5-7c383df6ee7e) Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2322] device (eth1): Activation: starting connection 'Wired connection 1' (3f6b1ecd-3b33-3888-bbb5-7c383df6ee7e) Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2323] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed') Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2326] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed') Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2330] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed') Nov 28 01:49:48 localhost NetworkManager[788]: [1764312588.2332] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Nov 28 01:49:48 localhost sshd[5813]: main: sshd: ssh-rsa algorithm is disabled Nov 28 01:49:48 localhost systemd-logind[763]: New session 3 of user zuul. Nov 28 01:49:48 localhost systemd[1]: Started Session 3 of User zuul. 
Nov 28 01:49:49 localhost python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-b1a9-fc65-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 01:49:49 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready Nov 28 01:50:02 localhost python3[5880]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:50:02 localhost python3[5923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764312602.1217651-537-163600387487933/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=45f8b686f86847d91097e2a4a4bdd4c78853fa25 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:50:03 localhost python3[5953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 01:50:03 localhost systemd[1]: NetworkManager-wait-online.service: Deactivated successfully. Nov 28 01:50:03 localhost systemd[1]: Stopped Network Manager Wait Online. Nov 28 01:50:03 localhost systemd[1]: Stopping Network Manager Wait Online... Nov 28 01:50:03 localhost systemd[1]: Stopping Network Manager... Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4550] caught SIGTERM, shutting down normally. 
Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4681] dhcp4 (eth0): canceled DHCP transaction Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4682] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4682] dhcp4 (eth0): state changed no lease Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4686] manager: NetworkManager state is now CONNECTING Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4772] dhcp4 (eth1): canceled DHCP transaction Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4772] dhcp4 (eth1): state changed no lease Nov 28 01:50:03 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 28 01:50:03 localhost NetworkManager[788]: [1764312603.4842] exiting (success) Nov 28 01:50:03 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 28 01:50:03 localhost systemd[1]: NetworkManager.service: Deactivated successfully. Nov 28 01:50:03 localhost systemd[1]: Stopped Network Manager. Nov 28 01:50:03 localhost systemd[1]: NetworkManager.service: Consumed 2.630s CPU time. Nov 28 01:50:03 localhost systemd[1]: Starting Network Manager... Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.5417] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:a439187c-a774-4883-a00c-1a7b4e2aa22a) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.5420] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.5446] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager" Nov 28 01:50:03 localhost systemd[1]: Started Network Manager. Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.5495] manager[0x561c475d8090]: monitoring kernel firmware directory '/lib/firmware'. Nov 28 01:50:03 localhost systemd[1]: Starting Network Manager Wait Online... 
Nov 28 01:50:03 localhost systemd[1]: Starting Hostname Service... Nov 28 01:50:03 localhost systemd[1]: Started Hostname Service. Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6316] hostname: hostname: using hostnamed Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6316] hostname: static hostname changed from (none) to "np0005538515.novalocal" Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6325] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6333] manager[0x561c475d8090]: rfkill: Wi-Fi hardware radio set enabled Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6334] manager[0x561c475d8090]: rfkill: WWAN hardware radio set enabled Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6385] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6387] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6389] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6389] manager: Networking is enabled by state file Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6400] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so") Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6401] settings: Loaded settings plugin: keyfile (internal) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6467] dhcp: init: Using DHCP client 'internal' Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6471] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6490] device (lo): state change: unmanaged -> 
unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6508] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6528] device (lo): Activation: starting connection 'lo' (116e0581-bf2b-4791-a901-61d85cb9c212) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6537] device (eth0): carrier: link connected Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6543] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6549] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6549] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6556] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6565] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6572] device (eth1): carrier: link connected Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6578] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6585] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (3f6b1ecd-3b33-3888-bbb5-7c383df6ee7e) (indicated) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6586] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', 
sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6593] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6603] device (eth1): Activation: starting connection 'Wired connection 1' (3f6b1ecd-3b33-3888-bbb5-7c383df6ee7e) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6629] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6634] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6638] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6640] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6647] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6650] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6654] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6658] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6682] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6689] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds) Nov 28 01:50:03 localhost NetworkManager[5965]: 
[1764312603.6701] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6705] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds) Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6748] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6756] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6765] device (lo): Activation: successful, device activated. Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6775] dhcp4 (eth0): state changed new lease, address=38.102.83.53 Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6780] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6902] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6948] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6951] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6956] manager: NetworkManager state is now CONNECTED_SITE Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6960] device (eth0): Activation: successful, device activated. 
Nov 28 01:50:03 localhost NetworkManager[5965]: [1764312603.6967] manager: NetworkManager state is now CONNECTED_GLOBAL Nov 28 01:50:03 localhost python3[6026]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-b1a9-fc65-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 01:50:13 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 28 01:50:33 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 28 01:50:48 localhost NetworkManager[5965]: [1764312648.7792] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:48 localhost systemd[1]: Starting Network Manager Script Dispatcher Service... Nov 28 01:50:48 localhost systemd[1]: Started Network Manager Script Dispatcher Service. Nov 28 01:50:48 localhost NetworkManager[5965]: [1764312648.8026] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:48 localhost NetworkManager[5965]: [1764312648.8033] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume') Nov 28 01:50:48 localhost NetworkManager[5965]: [1764312648.8052] device (eth1): Activation: successful, device activated. Nov 28 01:50:48 localhost NetworkManager[5965]: [1764312648.8063] manager: startup complete Nov 28 01:50:48 localhost systemd[1]: Finished Network Manager Wait Online. Nov 28 01:50:58 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully. Nov 28 01:51:03 localhost systemd[1]: session-3.scope: Deactivated successfully. Nov 28 01:51:03 localhost systemd[1]: session-3.scope: Consumed 1.515s CPU time. Nov 28 01:51:03 localhost systemd-logind[763]: Session 3 logged out. Waiting for processes to exit. 
Nov 28 01:51:04 localhost systemd-logind[763]: Removed session 3. Nov 28 01:51:25 localhost sshd[6053]: main: sshd: ssh-rsa algorithm is disabled Nov 28 01:51:25 localhost systemd-logind[763]: New session 4 of user zuul. Nov 28 01:51:25 localhost systemd[1]: Started Session 4 of User zuul. Nov 28 01:51:25 localhost python3[6104]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 01:51:25 localhost python3[6147]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764312685.3060608-628-204986376716689/source _original_basename=tmpq83qr24o follow=False checksum=10225105ecbcb8380becb3ed8e03293c5f034347 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 01:51:28 localhost systemd[1]: session-4.scope: Deactivated successfully. Nov 28 01:51:28 localhost systemd-logind[763]: Session 4 logged out. Waiting for processes to exit. Nov 28 01:51:28 localhost systemd-logind[763]: Removed session 4. Nov 28 01:54:41 localhost sshd[6163]: main: sshd: ssh-rsa algorithm is disabled Nov 28 01:56:53 localhost systemd[1]: Starting dnf makecache... Nov 28 01:56:54 localhost dnf[6164]: Failed determining last makecache time. Nov 28 01:56:54 localhost dnf[6164]: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d". Nov 28 01:56:54 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Nov 28 01:56:54 localhost systemd[1]: Finished dnf makecache. Nov 28 01:58:43 localhost systemd[1]: Starting Cleanup of Temporary Directories... Nov 28 01:58:43 localhost systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. 
Nov 28 01:58:43 localhost systemd[1]: Finished Cleanup of Temporary Directories.
Nov 28 01:58:43 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 28 01:58:59 localhost sshd[6169]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 01:58:59 localhost systemd-logind[763]: New session 5 of user zuul.
Nov 28 01:58:59 localhost systemd[1]: Started Session 5 of User zuul.
Nov 28 01:58:59 localhost python3[6188]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-0218-9e4d-000000001d10-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:01 localhost python3[6207]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:01 localhost python3[6223]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:01 localhost python3[6239]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:01 localhost python3[6255]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:02 localhost python3[6271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:03 localhost python3[6319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 01:59:04 localhost python3[6362]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764313143.556926-652-35063446022834/source _original_basename=tmpj7q725jz follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 01:59:05 localhost python3[6392]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 01:59:05 localhost systemd[1]: Reloading.
Nov 28 01:59:05 localhost systemd-rc-local-generator[6410]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 01:59:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 01:59:07 localhost python3[6439]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 28 01:59:08 localhost python3[6455]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:09 localhost python3[6473]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:09 localhost python3[6491]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:09 localhost python3[6509]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:20 localhost python3[6526]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init"; cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system"; cat /sys/fs/cgroup/system.slice/io.max; echo "user"; cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-0218-9e4d-000000001d17-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 01:59:21 localhost python3[6546]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 01:59:24 localhost systemd[1]: session-5.scope: Deactivated successfully.
Nov 28 01:59:24 localhost systemd[1]: session-5.scope: Consumed 3.997s CPU time.
Nov 28 01:59:24 localhost systemd-logind[763]: Session 5 logged out. Waiting for processes to exit.
Nov 28 01:59:24 localhost systemd-logind[763]: Removed session 5.
Nov 28 02:00:50 localhost sshd[6552]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:00:50 localhost systemd-logind[763]: New session 6 of user zuul.
Nov 28 02:00:50 localhost systemd[1]: Started Session 6 of User zuul.
Nov 28 02:00:51 localhost systemd[1]: Starting RHSM dbus service...
Nov 28 02:00:51 localhost systemd[1]: Started RHSM dbus service.
Nov 28 02:00:51 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:51 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:51 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:51 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:55 localhost rhsm-service[6576]: INFO [subscription_manager.managerlib:90] Consumer created: np0005538515.novalocal (c20224ed-ba86-41a6-a487-b9546587a93c)
Nov 28 02:00:55 localhost subscription-manager[6576]: Registered system with identity: c20224ed-ba86-41a6-a487-b9546587a93c
Nov 28 02:00:55 localhost rhsm-service[6576]: INFO [subscription_manager.entcertlib:131] certs updated:
Nov 28 02:00:55 localhost rhsm-service[6576]: Total updates: 1
Nov 28 02:00:55 localhost rhsm-service[6576]: Found (local) serial# []
Nov 28 02:00:55 localhost rhsm-service[6576]: Expected (UEP) serial# [7824755758168854409]
Nov 28 02:00:55 localhost rhsm-service[6576]: Added (new)
Nov 28 02:00:55 localhost rhsm-service[6576]: [sn:7824755758168854409 ( Content Access,) @ /etc/pki/entitlement/7824755758168854409.pem]
Nov 28 02:00:55 localhost rhsm-service[6576]: Deleted (rogue):
Nov 28 02:00:55 localhost rhsm-service[6576]:
Nov 28 02:00:55 localhost subscription-manager[6576]: Added subscription for 'Content Access' contract 'None'
Nov 28 02:00:55 localhost subscription-manager[6576]: Added subscription for product ' Content Access'
Nov 28 02:00:57 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:57 localhost rhsm-service[6576]: INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Nov 28 02:00:57 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:00:57 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:00:57 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:00:57 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:00:58 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:00:59 localhost python3[6667]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-cf29-7b10-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:01:52 localhost python3[6701]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:02:22 localhost setsebool[6776]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 28 02:02:22 localhost setsebool[6776]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 28 02:02:33 localhost kernel: SELinux: Converting 410 SID table entries...
Nov 28 02:02:33 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:02:33 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:02:33 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:02:33 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:02:33 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:02:33 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:02:33 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:02:45 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=3 res=1
Nov 28 02:02:45 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 02:02:45 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 28 02:02:45 localhost systemd[1]: Reloading.
Nov 28 02:02:45 localhost systemd-rc-local-generator[7646]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:02:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:02:45 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 02:02:47 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:02:47 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:02:49 localhost systemd[1]: var-lib-containers-storage-overlay-compat2983557930-merged.mount: Deactivated successfully.
Nov 28 02:02:49 localhost podman[13101]: 2025-11-28 07:02:49.621734953 +0000 UTC m=+0.090962412 system refresh
Nov 28 02:02:50 localhost systemd[4174]: Starting D-Bus User Message Bus...
Nov 28 02:02:50 localhost dbus-broker-launch[14507]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 28 02:02:50 localhost dbus-broker-launch[14507]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 28 02:02:50 localhost systemd[4174]: Started D-Bus User Message Bus.
Nov 28 02:02:50 localhost journal[14507]: Ready
Nov 28 02:02:50 localhost systemd[4174]: selinux: avc: op=load_policy lsm=selinux seqno=3 res=1
Nov 28 02:02:50 localhost systemd[4174]: Created slice Slice /user.
Nov 28 02:02:50 localhost systemd[4174]: podman-14354.scope: unit configures an IP firewall, but not running as root.
Nov 28 02:02:50 localhost systemd[4174]: (This warning is only shown for the first unit using IP firewalling.)
Nov 28 02:02:50 localhost systemd[4174]: Started podman-14354.scope.
Nov 28 02:02:50 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:02:50 localhost systemd[4174]: Started podman-pause-ee9564e6.scope.
Nov 28 02:02:51 localhost systemd[1]: session-6.scope: Deactivated successfully.
Nov 28 02:02:51 localhost systemd[1]: session-6.scope: Consumed 52.581s CPU time.
Nov 28 02:02:51 localhost systemd-logind[763]: Session 6 logged out. Waiting for processes to exit.
Nov 28 02:02:51 localhost systemd-logind[763]: Removed session 6.
Nov 28 02:02:53 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 02:02:53 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 28 02:02:53 localhost systemd[1]: man-db-cache-update.service: Consumed 9.319s CPU time.
Nov 28 02:02:53 localhost systemd[1]: run-r866cffe61bf245e8b4b329f1c339457e.service: Deactivated successfully.
Nov 28 02:03:07 localhost sshd[18432]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:07 localhost sshd[18433]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:07 localhost sshd[18431]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:07 localhost sshd[18434]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:07 localhost sshd[18435]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:11 localhost sshd[18441]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:03:11 localhost systemd-logind[763]: New session 7 of user zuul.
Nov 28 02:03:11 localhost systemd[1]: Started Session 7 of User zuul.
Nov 28 02:03:11 localhost python3[18458]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCwbdeJV6VDDXadsSf2RG5X7kz/GTOF493/FPhPlXmY8LaEjIgaNVgahbrG06qkZx72vk0TqexyzHBymiNAuWIc= zuul@np0005538507.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:03:12 localhost python3[18474]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCwbdeJV6VDDXadsSf2RG5X7kz/GTOF493/FPhPlXmY8LaEjIgaNVgahbrG06qkZx72vk0TqexyzHBymiNAuWIc= zuul@np0005538507.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:03:14 localhost systemd[1]: session-7.scope: Deactivated successfully.
Nov 28 02:03:14 localhost systemd-logind[763]: Session 7 logged out. Waiting for processes to exit.
Nov 28 02:03:14 localhost systemd-logind[763]: Removed session 7.
Nov 28 02:04:48 localhost sshd[18476]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:04:49 localhost systemd-logind[763]: New session 8 of user zuul.
Nov 28 02:04:49 localhost systemd[1]: Started Session 8 of User zuul.
Nov 28 02:04:49 localhost python3[18495]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:04:50 localhost python3[18511]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 28 02:04:52 localhost python3[18561]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:04:52 localhost python3[18604]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764313491.736329-139-40109814909018/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=4237e6bc560e46d9aa55417d911b2c55_id_rsa follow=False checksum=47f6a2f8fa426c1f34aad346f88073a22928af4e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:04:53 localhost python3[18667]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:04:53 localhost python3[18710]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764313493.405782-229-146659391301234/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=4237e6bc560e46d9aa55417d911b2c55_id_rsa.pub follow=False checksum=d1f12d852c72cfefab089d88337552962cfbc93d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:04:56 localhost python3[18740]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:04:57 localhost python3[18786]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:04:57 localhost python3[18802]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmpjahlwwbq recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:04:58 localhost python3[18862]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:04:59 localhost python3[18878]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpvrm7mxis recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:05:00 localhost python3[18938]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:05:00 localhost python3[18954]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpzbuw0znv recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:05:01 localhost systemd[1]: session-8.scope: Deactivated successfully.
Nov 28 02:05:01 localhost systemd[1]: session-8.scope: Consumed 3.519s CPU time.
Nov 28 02:05:01 localhost systemd-logind[763]: Session 8 logged out. Waiting for processes to exit.
Nov 28 02:05:01 localhost systemd-logind[763]: Removed session 8.
Nov 28 02:07:24 localhost sshd[18971]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:07:24 localhost systemd-logind[763]: New session 9 of user zuul.
Nov 28 02:07:24 localhost systemd[1]: Started Session 9 of User zuul.
Nov 28 02:07:25 localhost python3[19017]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:11:22 localhost sshd[19020]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:24 localhost sshd[19022]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:26 localhost sshd[19024]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:27 localhost sshd[19026]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:29 localhost sshd[19028]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:30 localhost sshd[19030]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:32 localhost sshd[19032]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:33 localhost sshd[19034]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:35 localhost sshd[19037]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:37 localhost sshd[19039]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:38 localhost sshd[19041]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:40 localhost sshd[19043]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:42 localhost sshd[19045]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:43 localhost sshd[19047]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:45 localhost sshd[19049]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:46 localhost sshd[19051]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:48 localhost sshd[19053]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:49 localhost sshd[19055]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:51 localhost sshd[19057]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:53 localhost sshd[19059]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:54 localhost sshd[19061]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:56 localhost sshd[19063]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:11:57 localhost sshd[19065]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:00 localhost sshd[19067]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:01 localhost sshd[19069]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:03 localhost sshd[19071]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:05 localhost sshd[19073]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:06 localhost sshd[19075]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:08 localhost sshd[19077]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:09 localhost sshd[19079]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:11 localhost sshd[19081]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:13 localhost sshd[19083]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:14 localhost sshd[19085]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:16 localhost sshd[19087]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:17 localhost sshd[19089]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:19 localhost sshd[19091]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:20 localhost sshd[19093]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:22 localhost sshd[19095]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:24 localhost sshd[19097]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:24 localhost systemd[1]: session-9.scope: Deactivated successfully.
Nov 28 02:12:24 localhost systemd-logind[763]: Session 9 logged out. Waiting for processes to exit.
Nov 28 02:12:24 localhost systemd-logind[763]: Removed session 9.
Nov 28 02:12:25 localhost sshd[19100]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:27 localhost sshd[19102]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:28 localhost sshd[19104]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:30 localhost sshd[19106]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:32 localhost sshd[19108]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:33 localhost sshd[19110]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:35 localhost sshd[19112]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:37 localhost sshd[19114]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:38 localhost sshd[19116]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:40 localhost sshd[19118]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:41 localhost sshd[19120]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:43 localhost sshd[19122]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:44 localhost sshd[19124]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:46 localhost sshd[19126]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:48 localhost sshd[19128]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:49 localhost sshd[19130]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:51 localhost sshd[19132]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:52 localhost sshd[19134]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:54 localhost sshd[19136]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:56 localhost sshd[19138]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:57 localhost sshd[19140]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:12:59 localhost sshd[19142]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:00 localhost sshd[19144]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:02 localhost sshd[19146]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:03 localhost sshd[19148]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:05 localhost sshd[19150]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:07 localhost sshd[19152]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:08 localhost sshd[19154]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:10 localhost sshd[19156]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:11 localhost sshd[19158]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:13 localhost sshd[19160]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:14 localhost sshd[19162]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:16 localhost sshd[19164]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:18 localhost sshd[19166]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:19 localhost sshd[19168]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:21 localhost sshd[19170]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:22 localhost sshd[19172]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:24 localhost sshd[19174]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:26 localhost sshd[19176]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:27 localhost sshd[19178]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:29 localhost sshd[19180]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:30 localhost sshd[19182]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:32 localhost sshd[19184]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:34 localhost sshd[19186]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:36 localhost sshd[19188]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:38 localhost sshd[19190]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:39 localhost sshd[19192]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:41 localhost sshd[19194]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:42 localhost sshd[19196]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:44 localhost sshd[19198]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:45 localhost sshd[19200]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:47 localhost sshd[19202]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:48 localhost sshd[19204]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:50 localhost sshd[19206]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:52 localhost sshd[19208]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:53 localhost sshd[19210]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:55 localhost sshd[19212]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:56 localhost sshd[19214]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:13:58 localhost sshd[19216]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:00 localhost sshd[19218]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:01 localhost sshd[19220]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:03 localhost sshd[19222]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:04 localhost sshd[19224]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:06 localhost sshd[19226]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:08 localhost sshd[19228]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:09 localhost sshd[19230]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:11 localhost sshd[19232]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:12 localhost sshd[19234]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:14 localhost sshd[19236]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:16 localhost sshd[19238]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:17 localhost sshd[19240]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:19 localhost sshd[19242]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:20 localhost sshd[19244]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:22 localhost sshd[19246]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:23 localhost sshd[19248]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:25 localhost sshd[19250]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:27 localhost sshd[19252]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:28 localhost sshd[19254]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:30 localhost sshd[19256]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:31 localhost sshd[19258]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:33 localhost sshd[19260]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:35 localhost sshd[19262]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:36 localhost sshd[19264]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:38 localhost sshd[19266]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:39 localhost sshd[19268]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:41 localhost sshd[19270]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:43 localhost sshd[19272]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:44 localhost sshd[19274]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:46 localhost sshd[19276]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:47 localhost sshd[19278]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:49 localhost sshd[19280]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:51 localhost sshd[19282]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:52 localhost sshd[19284]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:54 localhost sshd[19286]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:56 localhost sshd[19288]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:57 localhost sshd[19290]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:14:59 localhost sshd[19292]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:00 localhost sshd[19294]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:02 localhost sshd[19296]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:04 localhost sshd[19298]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:05 localhost sshd[19300]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:07 localhost sshd[19302]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:08 localhost sshd[19304]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:10 localhost sshd[19306]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:12 localhost sshd[19308]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:13 localhost sshd[19310]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:15 localhost sshd[19312]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:16 localhost sshd[19314]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:18 localhost sshd[19316]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:20 localhost sshd[19318]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:21 localhost sshd[19320]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:23 localhost sshd[19322]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:24 localhost sshd[19324]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:26 localhost sshd[19326]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:27 localhost sshd[19328]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:29 localhost sshd[19330]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:31 localhost sshd[19332]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:32 localhost sshd[19334]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:34 localhost sshd[19336]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:35 localhost sshd[19338]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:37 localhost sshd[19340]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:39 localhost sshd[19342]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:40 localhost sshd[19344]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:42 localhost sshd[19346]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:43 localhost sshd[19348]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:45 localhost sshd[19350]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:47 localhost sshd[19352]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:48 localhost sshd[19355]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:50 localhost sshd[19357]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:52 localhost sshd[19359]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:53 localhost sshd[19361]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:55 localhost sshd[19363]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:56 localhost sshd[19365]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:15:58 localhost sshd[19367]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:00 localhost sshd[19369]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:01 localhost sshd[19371]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:03 localhost sshd[19373]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:04 localhost sshd[19375]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:06 localhost sshd[19377]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:08 localhost sshd[19379]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:09 localhost sshd[19381]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:11 localhost sshd[19383]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:12 localhost sshd[19385]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:14 localhost sshd[19387]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:16 localhost sshd[19389]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:17 localhost sshd[19391]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:19 localhost sshd[19393]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:20 localhost sshd[19395]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:16:22 localhost sshd[19397]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:23 localhost sshd[19399]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:25 localhost sshd[19401]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:27 localhost sshd[19403]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:28 localhost sshd[19405]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:30 localhost sshd[19407]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:31 localhost sshd[19409]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:33 localhost sshd[19411]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:34 localhost sshd[19413]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:36 localhost sshd[19415]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:38 localhost sshd[19417]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:39 localhost sshd[19419]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:41 localhost sshd[19421]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:42 localhost sshd[19423]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:44 localhost sshd[19425]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:46 localhost sshd[19427]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:47 localhost sshd[19429]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:49 localhost sshd[19431]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:50 localhost sshd[19433]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:52 localhost sshd[19435]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:54 localhost sshd[19437]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:55 localhost sshd[19439]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:57 localhost sshd[19441]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:16:59 localhost sshd[19443]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:01 localhost sshd[19445]: main: sshd: 
ssh-rsa algorithm is disabled Nov 28 02:17:02 localhost sshd[19447]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:04 localhost sshd[19449]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:05 localhost sshd[19451]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:07 localhost sshd[19453]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:09 localhost sshd[19455]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:10 localhost sshd[19457]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:12 localhost sshd[19459]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:13 localhost sshd[19461]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:15 localhost sshd[19463]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:16 localhost sshd[19465]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:18 localhost sshd[19467]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:20 localhost sshd[19469]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:21 localhost sshd[19471]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:23 localhost sshd[19473]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:24 localhost sshd[19475]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:26 localhost sshd[19477]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:28 localhost sshd[19479]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:29 localhost sshd[19481]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:31 localhost sshd[19483]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:32 localhost sshd[19485]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:34 localhost sshd[19487]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:36 localhost sshd[19489]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:37 localhost sshd[19491]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:39 localhost sshd[19493]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:40 localhost 
sshd[19495]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:42 localhost sshd[19497]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:43 localhost sshd[19499]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:45 localhost sshd[19501]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:47 localhost sshd[19503]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:48 localhost sshd[19505]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:50 localhost sshd[19507]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:51 localhost sshd[19509]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:53 localhost sshd[19511]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:54 localhost sshd[19513]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:56 localhost sshd[19515]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:58 localhost sshd[19517]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:17:59 localhost sshd[19519]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:01 localhost sshd[19521]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:03 localhost sshd[19523]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:04 localhost sshd[19525]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:06 localhost sshd[19527]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:08 localhost sshd[19529]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:09 localhost sshd[19531]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:11 localhost sshd[19533]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:12 localhost sshd[19535]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:14 localhost sshd[19537]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:16 localhost sshd[19539]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:17 localhost sshd[19541]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:19 localhost sshd[19543]: main: sshd: ssh-rsa algorithm is disabled 
Nov 28 02:18:20 localhost sshd[19545]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:22 localhost sshd[19547]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:24 localhost sshd[19549]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:25 localhost sshd[19551]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:27 localhost sshd[19553]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:28 localhost sshd[19555]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:30 localhost sshd[19557]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:31 localhost sshd[19559]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:33 localhost sshd[19561]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:35 localhost sshd[19563]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:36 localhost sshd[19565]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:38 localhost sshd[19567]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:39 localhost sshd[19569]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:41 localhost sshd[19571]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:43 localhost sshd[19573]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:44 localhost sshd[19575]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:46 localhost sshd[19577]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:47 localhost sshd[19579]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:49 localhost sshd[19581]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:51 localhost sshd[19583]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:52 localhost sshd[19585]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:54 localhost sshd[19587]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:55 localhost sshd[19589]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:57 localhost sshd[19591]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:18:59 localhost sshd[19593]: main: sshd: 
ssh-rsa algorithm is disabled Nov 28 02:19:00 localhost sshd[19595]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:02 localhost sshd[19597]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:03 localhost sshd[19599]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:05 localhost sshd[19601]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:06 localhost sshd[19603]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:08 localhost sshd[19605]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:10 localhost sshd[19607]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:11 localhost sshd[19609]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:13 localhost sshd[19611]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:14 localhost sshd[19613]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:16 localhost sshd[19615]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:18 localhost sshd[19617]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:19 localhost sshd[19619]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:21 localhost sshd[19621]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:22 localhost sshd[19623]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:24 localhost sshd[19625]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:26 localhost sshd[19627]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:27 localhost sshd[19629]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:29 localhost sshd[19631]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:30 localhost sshd[19633]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:32 localhost sshd[19635]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:33 localhost sshd[19637]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:35 localhost sshd[19639]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:37 localhost sshd[19641]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:38 localhost 
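On RHEL 9, the DEFAULT system-wide crypto policy rejects the SHA-1-based ssh-rsa signature algorithm, which is what produces this repeating sshd message whenever a client keeps offering an ssh-rsa key. A sketch of the usual remedies (command/config fragments, assuming root on the logged host; the drop-in file name is illustrative):

```shell
# Option 1 (preferred): switch the connecting client to a key type the
# DEFAULT policy accepts, e.g. Ed25519:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# Option 2: relax the system-wide crypto policy (affects all services):
update-crypto-policies --set DEFAULT:SHA1

# Option 3: re-enable ssh-rsa for sshd only, via a drop-in:
cat > /etc/ssh/sshd_config.d/40-ssh-rsa.conf <<'EOF'
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa
EOF
systemctl restart sshd
```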
sshd[19643]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:39 localhost sshd[19646]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:39 localhost systemd-logind[763]: New session 10 of user zuul. Nov 28 02:19:39 localhost systemd[1]: Started Session 10 of User zuul. Nov 28 02:19:40 localhost python3[19663]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-f34b-e95a-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:19:40 localhost sshd[19667]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:42 localhost sshd[19670]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:43 localhost python3[19687]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-f34b-e95a-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:19:43 localhost sshd[19690]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:45 localhost sshd[19692]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:46 localhost sshd[19695]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:48 localhost sshd[19697]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:49 localhost sshd[19699]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:51 localhost sshd[19701]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:53 localhost sshd[19703]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:54 localhost sshd[19705]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:56 localhost sshd[19707]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:19:58 localhost sshd[19709]: main: sshd: ssh-rsa algorithm is disabled 
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:19:59-02:20:12, omitted]
Nov 28 02:20:13 localhost python3[19744]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:20:14-02:20:15, omitted]
Nov 28 02:20:16 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:20:17-02:20:47, omitted]
Nov 28 02:20:48 localhost python3[19944]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:20:49-02:20:50, omitted]
Nov 28 02:20:51 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:20:52-02:21:08, omitted]
Nov 28 02:21:09 localhost python3[20111]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:09-02:21:11, omitted]
Nov 28 02:21:12 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:21:12 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:13-02:21:16, omitted]
Nov 28 02:21:17 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:17-02:21:38, omitted]
Nov 28 02:21:40 localhost python3[20484]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:40-02:21:41, omitted]
Nov 28 02:21:43 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:21:43 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:43-02:21:48, omitted]
Nov 28 02:21:48 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:21:48 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:21:49-02:22:11, omitted]
Nov 28 02:22:12 localhost python3[20862]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:22:13-02:22:14, omitted]
Nov 28 02:22:15 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:22:15 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
[sshd "ssh-rsa algorithm is disabled" entries, Nov 28 02:22:16-02:22:19, omitted]
Nov 28 02:22:21 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:22:21 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
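The ansible-community.general.rhsm_repository tasks above enable one repository per invocation; on the host itself the equivalent is subscription-manager. A sketch, assuming the node is registered (repo IDs taken verbatim from the log entries above; command fragment, not runnable off a RHEL host):

```shell
# Enable the same repositories the Ansible tasks enabled, in one call:
subscription-manager repos \
  --enable=rhel-9-for-x86_64-baseos-eus-rpms \
  --enable=rhel-9-for-x86_64-appstream-eus-rpms \
  --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
  --enable=fast-datapath-for-rhel-9-x86_64-rpms \
  --enable=openstack-17.1-for-rhel-9-x86_64-rpms

# Verify, mirroring the "yum repolist --enabled" task in the log:
dnf repolist --enabled
```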
Nov 28 02:22:32 localhost python3[21270]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:23:01 localhost python3[21289]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:23:12 localhost systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 28 02:23:22 localhost kernel: SELinux: Converting 503 SID table entries...
Nov 28 02:23:22 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:23:22 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:23:22 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:23:22 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:23:22 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:23:22 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:23:22 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:23:24 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=4 res=1
Nov 28 02:23:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 02:23:24 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 28 02:23:24 localhost systemd[1]: Reloading.
Nov 28 02:23:24 localhost systemd-rc-local-generator[21947]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:23:24 localhost systemd-sysv-generator[21953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:23:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:23:24 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 02:23:25 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 02:23:25 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 28 02:23:25 localhost systemd[1]: run-rfe612edd75704cda8f860ad050967a17.service: Deactivated successfully.
Nov 28 02:23:25 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Nov 28 02:23:25 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
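The insights-client-boot.service message is systemd flagging the deprecated MemoryLimit= directive. The usual fix is an override drop-in rather than editing the packaged unit; a sketch (the 1G value and drop-in file name are illustrative, not taken from the log):

```shell
# Clear the deprecated MemoryLimit= and set MemoryMax= in a drop-in:
mkdir -p /etc/systemd/system/insights-client-boot.service.d
cat > /etc/systemd/system/insights-client-boot.service.d/override.conf <<'EOF'
[Service]
MemoryLimit=
MemoryMax=1G
EOF
systemctl daemon-reload
```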
Nov 28 02:23:40 localhost python3[22497]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:23:58 localhost python3[22517]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:24:00 localhost python3[22565]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:24:00 localhost python3[22608]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764314639.8936636-334-43831190965907/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:24:02 localhost python3[22638]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Nov 28 02:24:03 localhost systemd-journald[618]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Nov 28 02:24:03 localhost systemd-journald[618]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating.
Nov 28 02:24:03 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 02:24:03 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 02:24:03 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Nov 28 02:24:03 localhost python3[22659]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent [remaining parameters identical to the conn_name=ci-private-network entry above]
Nov 28 02:24:04 localhost python3[22679]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent [remaining parameters identical to the conn_name=ci-private-network entry above]
Nov 28 02:24:04 localhost python3[22699]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent [remaining parameters identical to the conn_name=ci-private-network entry above]
Nov 28 02:24:04 localhost python3[22719]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent [remaining parameters identical to the conn_name=ci-private-network entry above]
Nov 28 02:24:07 localhost python3[22739]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:24:07 localhost systemd[1]: Starting LSB: Bring up/down
networking... Nov 28 02:24:07 localhost network[22742]: WARN : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 02:24:07 localhost network[22753]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 02:24:07 localhost network[22742]: WARN : [network] 'network-scripts' will be removed from distribution in near future. Nov 28 02:24:07 localhost network[22754]: 'network-scripts' will be removed from distribution in near future. Nov 28 02:24:07 localhost network[22742]: WARN : [network] It is advised to switch to 'NetworkManager' instead for network management. Nov 28 02:24:07 localhost network[22755]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 02:24:07 localhost NetworkManager[5965]: [1764314647.6575] audit: op="connections-reload" pid=22783 uid=0 result="success" Nov 28 02:24:07 localhost network[22742]: Bringing up loopback interface: [ OK ] Nov 28 02:24:07 localhost NetworkManager[5965]: [1764314647.8433] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22871 uid=0 result="success" Nov 28 02:24:07 localhost network[22742]: Bringing up interface eth0: [ OK ] Nov 28 02:24:07 localhost systemd[1]: Started LSB: Bring up/down networking. Nov 28 02:24:08 localhost python3[22912]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 02:24:08 localhost systemd[1]: Starting Open vSwitch Database Unit... Nov 28 02:24:08 localhost chown[22916]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory Nov 28 02:24:08 localhost ovs-ctl[22921]: /etc/openvswitch/conf.db does not exist ... (warning). 
Nov 28 02:24:08 localhost ovs-ctl[22921]: Creating empty database /etc/openvswitch/conf.db [ OK ]
Nov 28 02:24:08 localhost ovs-ctl[22921]: Starting ovsdb-server [ OK ]
Nov 28 02:24:08 localhost ovs-vsctl[22970]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 28 02:24:08 localhost ovs-vsctl[22990]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"62c03cad-89c1-4fd7-973b-8f2a608c71f1\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Nov 28 02:24:08 localhost ovs-ctl[22921]: Configuring Open vSwitch system IDs [ OK ]
Nov 28 02:24:08 localhost ovs-ctl[22921]: Enabling remote OVSDB managers [ OK ]
Nov 28 02:24:08 localhost ovs-vsctl[22996]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005538515.novalocal
Nov 28 02:24:08 localhost systemd[1]: Started Open vSwitch Database Unit.
Nov 28 02:24:08 localhost systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 28 02:24:08 localhost systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 28 02:24:08 localhost systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 28 02:24:08 localhost kernel: openvswitch: Open vSwitch switching datapath
Nov 28 02:24:08 localhost ovs-ctl[23040]: Inserting openvswitch module [ OK ]
Nov 28 02:24:08 localhost ovs-ctl[23009]: Starting ovs-vswitchd [ OK ]
Nov 28 02:24:08 localhost ovs-vsctl[23058]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005538515.novalocal
Nov 28 02:24:08 localhost ovs-ctl[23009]: Enabling remote OVSDB managers [ OK ]
Nov 28 02:24:08 localhost systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 28 02:24:08 localhost systemd[1]: Starting Open vSwitch...
Nov 28 02:24:08 localhost systemd[1]: Finished Open vSwitch.
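The ansible-community.general.nmcli invocations earlier in this log run with state=absent, which removes the named NetworkManager connections. As a dry-run sketch only (the commands are printed, not executed; connection names are taken verbatim from the log), the equivalent manual cleanup would look like:

```shell
#!/usr/bin/env bash
# Dry-run sketch: print the nmcli commands equivalent to the
# community.general.nmcli state=absent invocations logged above.
# Nothing is executed against NetworkManager here.
cmds=""
for conn in ci-private-network-20 ci-private-network-21 \
            ci-private-network-22 ci-private-network-23; do
  cmd="nmcli connection delete $conn"
  echo "$cmd"          # show the command that the module effectively performs
  cmds="$cmds$cmd
"
done
```

Deleting by connection name matches the module's conn_name parameter; state=absent is idempotent, so repeating the deletion for an already-removed connection is not an error in the Ansible task.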
Nov 28 02:24:39 localhost python3[23076]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.1561] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23235 uid=0 result="success"
Nov 28 02:24:40 localhost ifup[23236]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:40 localhost ifup[23237]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:40 localhost ifup[23238]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.1870] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23244 uid=0 result="success"
Nov 28 02:24:40 localhost ovs-vsctl[23246]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:f7:e2:83 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Nov 28 02:24:40 localhost kernel: device ovs-system entered promiscuous mode
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.2159] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Nov 28 02:24:40 localhost systemd-udevd[23247]: Network interface NamePolicy= disabled on kernel command line.
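The os-net-config run above drives ovs-vsctl to create the external bridge; --may-exist makes re-runs idempotent. A dry-run sketch of that transaction (bridge name, MAC, and fail_mode copied from the log entry; the command is assembled and printed, not executed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the br-ex creation transaction seen in the log.
# Values come from the logged ovs-vsctl[23246] invocation; nothing
# here talks to an actual OVSDB.
bridge=br-ex
hwaddr=fa:16:3e:f7:e2:83
add_br_cmd="ovs-vsctl -t 10 -- --may-exist add-br $bridge\
 -- set bridge $bridge other-config:mac-table-size=50000\
 -- set bridge $bridge other-config:hwaddr=$hwaddr\
 -- set bridge $bridge fail_mode=standalone\
 -- del-controller $bridge"
echo "$add_br_cmd"
```

Chaining the sub-commands with `--` makes the whole sequence a single OVSDB transaction, so the bridge is never observable in a half-configured state; fail_mode=standalone keeps traffic flowing if no controller is attached.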
Nov 28 02:24:40 localhost kernel: Timeout policy base is empty
Nov 28 02:24:40 localhost kernel: Failed to associated timeout policy `ovs_test_tp'
Nov 28 02:24:40 localhost kernel: device br-ex entered promiscuous mode
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.2605] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.2860] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23273 uid=0 result="success"
Nov 28 02:24:40 localhost NetworkManager[5965]: [1764314680.3064] device (br-ex): carrier: link connected
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.3587] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23302 uid=0 result="success"
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.4002] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23317 uid=0 result="success"
Nov 28 02:24:43 localhost NET[23342]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.4875] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.4983] dhcp4 (eth1): canceled DHCP transaction
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5004] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5004] dhcp4 (eth1): state changed no lease
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5034] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23351 uid=0 result="success"
Nov 28 02:24:43 localhost ifup[23352]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:43 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 02:24:43 localhost ifup[23353]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:43 localhost ifup[23355]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:43 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5320] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23369 uid=0 result="success"
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5631] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23379 uid=0 result="success"
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5672] device (eth1): carrier: link connected
Nov 28 02:24:43 localhost NetworkManager[5965]: [1764314683.5779] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23388 uid=0 result="success"
Nov 28 02:24:43 localhost ipv6_wait_tentative[23400]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Nov 28 02:24:44 localhost ipv6_wait_tentative[23405]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.6341] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=23414 uid=0 result="success"
Nov 28 02:24:45 localhost ovs-vsctl[23429]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Nov 28 02:24:45 localhost kernel: device eth1 entered promiscuous mode
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.7092] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23437 uid=0 result="success"
Nov 28 02:24:45 localhost ifup[23438]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:45 localhost ifup[23439]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:45 localhost ifup[23440]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.7431] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=23446 uid=0 result="success"
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.7806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23456 uid=0 result="success"
Nov 28 02:24:45 localhost ifup[23457]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:45 localhost ifup[23458]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:45 localhost ifup[23459]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.8040] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23465 uid=0 result="success"
Nov 28 02:24:45 localhost ovs-vsctl[23468]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Nov 28 02:24:45 localhost kernel: device vlan21 entered promiscuous mode
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.8403] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Nov 28 02:24:45 localhost systemd-udevd[23470]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.8596] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23479 uid=0 result="success"
Nov 28 02:24:45 localhost NetworkManager[5965]: [1764314685.8745] device (vlan21): carrier: link connected
Nov 28 02:24:48 localhost NetworkManager[5965]: [1764314688.9223] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23508 uid=0 result="success"
Nov 28 02:24:48 localhost NetworkManager[5965]: [1764314688.9752] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23523 uid=0 result="success"
Nov 28 02:24:49 localhost NetworkManager[5965]: [1764314689.0409] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23544 uid=0 result="success"
Nov 28 02:24:49 localhost ifup[23545]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:49 localhost ifup[23546]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:49 localhost ifup[23547]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:49 localhost NetworkManager[5965]: [1764314689.1074] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23553 uid=0 result="success"
Nov 28 02:24:49 localhost ovs-vsctl[23556]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Nov 28 02:24:49 localhost kernel: device vlan23 entered promiscuous mode
Nov 28 02:24:49 localhost NetworkManager[5965]: [1764314689.1521] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Nov 28 02:24:49 localhost systemd-udevd[23558]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 02:24:49 localhost NetworkManager[5965]: [1764314689.1806] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23568 uid=0 result="success"
Nov 28 02:24:49 localhost NetworkManager[5965]: [1764314689.2009] device (vlan23): carrier: link connected
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.2546] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23598 uid=0 result="success"
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.3031] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23613 uid=0 result="success"
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.3650] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23634 uid=0 result="success"
Nov 28 02:24:52 localhost ifup[23635]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:52 localhost ifup[23636]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:52 localhost ifup[23637]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.3960] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23643 uid=0 result="success"
Nov 28 02:24:52 localhost ovs-vsctl[23646]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Nov 28 02:24:52 localhost kernel: device vlan20 entered promiscuous mode
Nov 28 02:24:52 localhost systemd-udevd[23648]: Network interface NamePolicy= disabled on kernel command line.
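Each VLAN in this log (20, 21, 22, 23, 44) gets the same treatment: an OVS internal port on br-ex tagged with its VLAN ID, applied through an idempotent "--if-exists del-port ... -- add-port" transaction. A dry-run sketch of that repeated pattern (VLAN IDs taken from the log; commands are printed, not executed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-VLAN ovs-vsctl pattern logged above.
# "--if-exists del-port" followed by "add-port" in one transaction makes the
# operation safe to re-run; type=internal creates a port the host can address.
vlan_port_cmds=""
for tag in 20 21 22 23 44; do
  cmd="ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan$tag -- add-port br-ex vlan$tag tag=$tag -- set Interface vlan$tag type=internal"
  echo "$cmd"
  vlan_port_cmds="$vlan_port_cmds$cmd
"
done
```

This explains the repetition in the log: os-net-config re-applies the same transaction per VLAN, and re-runs (vlan44 and vlan20 appear twice) are harmless because the del-port/add-port pair converges to the same state.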
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.4366] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.4635] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23658 uid=0 result="success"
Nov 28 02:24:52 localhost NetworkManager[5965]: [1764314692.4839] device (vlan20): carrier: link connected
Nov 28 02:24:53 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.5335] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23688 uid=0 result="success"
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.5785] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23703 uid=0 result="success"
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.6403] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23724 uid=0 result="success"
Nov 28 02:24:55 localhost ifup[23725]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:55 localhost ifup[23726]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:55 localhost ifup[23727]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.6734] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23733 uid=0 result="success"
Nov 28 02:24:55 localhost ovs-vsctl[23736]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Nov 28 02:24:55 localhost kernel: device vlan22 entered promiscuous mode
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.7113] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Nov 28 02:24:55 localhost systemd-udevd[23738]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.7379] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23748 uid=0 result="success"
Nov 28 02:24:55 localhost NetworkManager[5965]: [1764314695.7595] device (vlan22): carrier: link connected
Nov 28 02:24:58 localhost NetworkManager[5965]: [1764314698.8129] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23778 uid=0 result="success"
Nov 28 02:24:58 localhost NetworkManager[5965]: [1764314698.8589] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23793 uid=0 result="success"
Nov 28 02:24:58 localhost NetworkManager[5965]: [1764314698.9208] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23814 uid=0 result="success"
Nov 28 02:24:58 localhost ifup[23815]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:24:58 localhost ifup[23816]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:24:58 localhost ifup[23817]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:24:58 localhost NetworkManager[5965]: [1764314698.9532] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23823 uid=0 result="success"
Nov 28 02:24:58 localhost ovs-vsctl[23826]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Nov 28 02:24:58 localhost NetworkManager[5965]: [1764314698.9939] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Nov 28 02:24:58 localhost kernel: device vlan44 entered promiscuous mode
Nov 28 02:24:58 localhost systemd-udevd[23828]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 02:24:59 localhost NetworkManager[5965]: [1764314699.0188] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23838 uid=0 result="success"
Nov 28 02:24:59 localhost NetworkManager[5965]: [1764314699.0405] device (vlan44): carrier: link connected
Nov 28 02:25:02 localhost NetworkManager[5965]: [1764314702.1039] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23868 uid=0 result="success"
Nov 28 02:25:02 localhost NetworkManager[5965]: [1764314702.1529] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23883 uid=0 result="success"
Nov 28 02:25:02 localhost NetworkManager[5965]: [1764314702.2130] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23904 uid=0 result="success"
Nov 28 02:25:02 localhost ifup[23905]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:25:02 localhost ifup[23906]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:25:02 localhost ifup[23907]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:25:02 localhost NetworkManager[5965]: [1764314702.2438] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23913 uid=0 result="success"
Nov 28 02:25:02 localhost ovs-vsctl[23916]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Nov 28 02:25:02 localhost NetworkManager[5965]: [1764314702.3081] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23923 uid=0 result="success"
Nov 28 02:25:03 localhost NetworkManager[5965]: [1764314703.3786] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23950 uid=0 result="success"
Nov 28 02:25:03 localhost NetworkManager[5965]: [1764314703.4318] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23965 uid=0 result="success"
Nov 28 02:25:03 localhost NetworkManager[5965]: [1764314703.4989] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23986 uid=0 result="success"
Nov 28 02:25:03 localhost ifup[23987]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:25:03 localhost ifup[23988]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:25:03 localhost ifup[23989]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:25:03 localhost NetworkManager[5965]: [1764314703.5333] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23995 uid=0 result="success"
Nov 28 02:25:03 localhost ovs-vsctl[23998]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Nov 28 02:25:03 localhost NetworkManager[5965]: [1764314703.5977] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=24005 uid=0 result="success"
Nov 28 02:25:04 localhost NetworkManager[5965]: [1764314704.6626] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=24033 uid=0 result="success"
Nov 28 02:25:04 localhost NetworkManager[5965]: [1764314704.7117] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=24048 uid=0 result="success"
Nov 28 02:25:04 localhost NetworkManager[5965]: [1764314704.7676] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=24069 uid=0 result="success"
Nov 28 02:25:04 localhost ifup[24070]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:25:04 localhost ifup[24071]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:25:04 localhost ifup[24072]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:25:04 localhost NetworkManager[5965]: [1764314704.7949] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=24078 uid=0 result="success"
Nov 28 02:25:04 localhost ovs-vsctl[24081]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Nov 28 02:25:04 localhost NetworkManager[5965]: [1764314704.8525] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=24088 uid=0 result="success"
Nov 28 02:25:05 localhost NetworkManager[5965]: [1764314705.9132] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=24116 uid=0 result="success"
Nov 28 02:25:05 localhost NetworkManager[5965]: [1764314705.9580] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=24131 uid=0 result="success"
Nov 28 02:25:06 localhost NetworkManager[5965]: [1764314706.0185] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=24152 uid=0 result="success"
Nov 28 02:25:06 localhost ifup[24153]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:25:06 localhost ifup[24154]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:25:06 localhost ifup[24155]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:25:06 localhost NetworkManager[5965]: [1764314706.0495] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=24161 uid=0 result="success"
Nov 28 02:25:06 localhost ovs-vsctl[24164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Nov 28 02:25:06 localhost NetworkManager[5965]: [1764314706.1083] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=24171 uid=0 result="success"
Nov 28 02:25:07 localhost NetworkManager[5965]: [1764314707.1695] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=24199 uid=0 result="success"
Nov 28 02:25:07 localhost NetworkManager[5965]: [1764314707.2210] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=24214 uid=0 result="success"
Nov 28 02:25:07 localhost NetworkManager[5965]: [1764314707.2859] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=24235 uid=0 result="success"
Nov 28 02:25:07 localhost ifup[24236]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Nov 28 02:25:07 localhost ifup[24237]: 'network-scripts' will be removed from distribution in near future.
Nov 28 02:25:07 localhost ifup[24238]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Nov 28 02:25:07 localhost NetworkManager[5965]: [1764314707.3194] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=24244 uid=0 result="success"
Nov 28 02:25:07 localhost ovs-vsctl[24247]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Nov 28 02:25:07 localhost NetworkManager[5965]: [1764314707.3789] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=24254 uid=0 result="success"
Nov 28 02:25:08 localhost NetworkManager[5965]: [1764314708.4441] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=24282 uid=0 result="success"
Nov 28 02:25:08 localhost NetworkManager[5965]: [1764314708.4917] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=24297 uid=0 result="success"
Nov 28 02:25:34 localhost python3[24329]: ansible-ansible.legacy.command Invoked with _raw_params=ip a#012ping -c 2 -W 2 192.168.122.10#012ping -c 2 -W 2 192.168.122.11#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:25:38 localhost python3[24348]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:25:38 localhost python3[24364]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:25:40 localhost python3[24378]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:25:40 localhost python3[24394]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 28 02:25:41 localhost python3[24408]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Nov 28 02:25:42 localhost python3[24423]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005538515.novalocal"#012hostname_str_array=(${hostname//./ })#012echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:25:43 localhost python3[24443]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)#012hostnamectl hostname "$hostname.localdomain"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f34b-e95a-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:25:43 localhost systemd[1]: Starting Hostname Service...
Nov 28 02:25:43 localhost systemd[1]: Started Hostname Service.
Nov 28 02:25:43 localhost systemd-hostnamed[24447]: Hostname set to (static)
Nov 28 02:25:43 localhost NetworkManager[5965]: [1764314743.1007] hostname: static hostname changed from "np0005538515.novalocal" to "np0005538515.localdomain"
Nov 28 02:25:43 localhost systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 28 02:25:43 localhost systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 28 02:25:44 localhost systemd-logind[763]: Session 10 logged out. Waiting for processes to exit.
Nov 28 02:25:44 localhost systemd[1]: session-10.scope: Deactivated successfully.
Nov 28 02:25:44 localhost systemd[1]: session-10.scope: Consumed 1min 43.779s CPU time.
Nov 28 02:25:44 localhost systemd-logind[763]: Removed session 10.
Nov 28 02:25:47 localhost sshd[24458]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:25:47 localhost systemd-logind[763]: New session 11 of user zuul.
Nov 28 02:25:47 localhost systemd[1]: Started Session 11 of User zuul.
Nov 28 02:25:47 localhost python3[24475]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Nov 28 02:25:49 localhost systemd[1]: session-11.scope: Deactivated successfully.
Nov 28 02:25:49 localhost systemd-logind[763]: Session 11 logged out. Waiting for processes to exit.
Nov 28 02:25:49 localhost systemd-logind[763]: Removed session 11.
Nov 28 02:25:53 localhost systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 28 02:26:13 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 02:26:38 localhost sshd[24480]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:26:39 localhost systemd-logind[763]: New session 12 of user zuul.
Nov 28 02:26:39 localhost systemd[1]: Started Session 12 of User zuul.
Nov 28 02:26:39 localhost python3[24499]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:26:42 localhost systemd[1]: Reloading.
Nov 28 02:26:42 localhost systemd-sysv-generator[24541]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:26:42 localhost systemd-rc-local-generator[24537]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:26:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:26:43 localhost systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 28 02:26:43 localhost systemd[1]: Reloading.
Nov 28 02:26:43 localhost systemd-rc-local-generator[24579]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:26:43 localhost systemd-sysv-generator[24585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:26:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:26:43 localhost systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 28 02:26:43 localhost systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 28 02:26:43 localhost systemd[1]: Reloading.
Nov 28 02:26:43 localhost systemd-sysv-generator[24624]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:26:43 localhost systemd-rc-local-generator[24620]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:26:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:26:43 localhost systemd[1]: Listening on LVM2 poll daemon socket.
Nov 28 02:26:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 02:26:43 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 28 02:26:44 localhost systemd[1]: Reloading.
Nov 28 02:26:44 localhost systemd-sysv-generator[24685]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:26:44 localhost systemd-rc-local-generator[24679]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:26:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:26:44 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 02:26:44 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 02:26:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 02:26:44 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 28 02:26:44 localhost systemd[1]: run-r6b732527809749d4b4619d874cb61790.service: Deactivated successfully.
Nov 28 02:26:44 localhost systemd[1]: run-r25d053d67fa2446ea87d51c0ee8e2bf0.service: Deactivated successfully.
Nov 28 02:27:45 localhost systemd[1]: session-12.scope: Deactivated successfully.
Nov 28 02:27:45 localhost systemd[1]: session-12.scope: Consumed 4.561s CPU time.
Nov 28 02:27:45 localhost systemd-logind[763]: Session 12 logged out. Waiting for processes to exit.
Nov 28 02:27:45 localhost systemd-logind[763]: Removed session 12.
Nov 28 02:39:24 localhost sshd[25274]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:39:25 localhost sshd[25275]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:39:43 localhost sshd[25276]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:39:44 localhost sshd[25278]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:43:42 localhost sshd[25281]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:43:42 localhost systemd-logind[763]: New session 13 of user zuul.
Nov 28 02:43:42 localhost systemd[1]: Started Session 13 of User zuul.
Nov 28 02:43:43 localhost python3[25329]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 02:43:45 localhost python3[25416]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:43:48 localhost python3[25433]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:43:49 localhost python3[25449]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:43:49 localhost kernel: loop: module loaded
Nov 28 02:43:49 localhost kernel: loop3: detected capacity change from 0 to 14680064
Nov 28 02:43:49 localhost python3[25474]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:43:49 localhost lvm[25477]: PV /dev/loop3 not used.
Nov 28 02:43:49 localhost lvm[25479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 28 02:43:49 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 28 02:43:49 localhost lvm[25482]: 1 logical volume(s) in volume group "ceph_vg0" now active
Nov 28 02:43:49 localhost lvm[25490]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 28 02:43:49 localhost lvm[25490]: VG ceph_vg0 finished
Nov 28 02:43:49 localhost systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 28 02:43:50 localhost python3[25539]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:43:50 localhost python3[25582]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764315830.059363-54710-33954886908959/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:43:51 localhost python3[25612]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:43:51 localhost systemd[1]: Reloading.
Nov 28 02:43:51 localhost systemd-rc-local-generator[25637]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:43:51 localhost systemd-sysv-generator[25640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:43:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:43:51 localhost systemd[1]: Starting Ceph OSD losetup...
Nov 28 02:43:51 localhost bash[25652]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Nov 28 02:43:51 localhost systemd[1]: Finished Ceph OSD losetup.
Nov 28 02:43:52 localhost lvm[25653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 28 02:43:52 localhost lvm[25653]: VG ceph_vg0 finished
Nov 28 02:43:52 localhost python3[25669]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:43:55 localhost python3[25686]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:43:56 localhost python3[25702]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:43:56 localhost kernel: loop4: detected capacity change from 0 to 14680064
Nov 28 02:43:56 localhost python3[25724]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:43:56 localhost lvm[25727]: PV /dev/loop4 not used.
Nov 28 02:43:56 localhost lvm[25729]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 28 02:43:56 localhost systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 28 02:43:56 localhost lvm[25736]: 1 logical volume(s) in volume group "ceph_vg1" now active
Nov 28 02:43:56 localhost lvm[25740]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 28 02:43:56 localhost lvm[25740]: VG ceph_vg1 finished
Nov 28 02:43:56 localhost systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 28 02:43:57 localhost python3[25788]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:43:57 localhost python3[25831]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764315837.2653916-54794-28444591396235/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:43:58 localhost python3[25861]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:43:58 localhost systemd[1]: Reloading.
Nov 28 02:43:58 localhost systemd-sysv-generator[25894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:43:58 localhost systemd-rc-local-generator[25887]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:43:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:43:58 localhost systemd[1]: Starting Ceph OSD losetup...
Nov 28 02:43:58 localhost bash[25902]: /dev/loop4: [64516]:9169890 (/var/lib/ceph-osd-1.img)
Nov 28 02:43:58 localhost systemd[1]: Finished Ceph OSD losetup.
Nov 28 02:43:59 localhost lvm[25903]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 28 02:43:59 localhost lvm[25903]: VG ceph_vg1 finished
Nov 28 02:44:07 localhost python3[25948]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 02:44:08 localhost python3[25968]: ansible-hostname Invoked with name=np0005538515.localdomain use=None
Nov 28 02:44:08 localhost systemd[1]: Starting Hostname Service...
Nov 28 02:44:08 localhost systemd[1]: Started Hostname Service.
Nov 28 02:44:11 localhost python3[25991]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Nov 28 02:44:12 localhost python3[26039]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.vmihhaiutmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:13 localhost python3[26069]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.vmihhaiutmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:13 localhost python3[26085]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.vmihhaiutmphosts insertbefore=BOF block=192.168.122.106 np0005538513.localdomain np0005538513#012192.168.122.106 np0005538513.ctlplane.localdomain np0005538513.ctlplane#012192.168.122.107 np0005538514.localdomain np0005538514#012192.168.122.107 np0005538514.ctlplane.localdomain np0005538514.ctlplane#012192.168.122.108 np0005538515.localdomain np0005538515#012192.168.122.108 np0005538515.ctlplane.localdomain np0005538515.ctlplane#012192.168.122.103 np0005538510.localdomain np0005538510#012192.168.122.103 np0005538510.ctlplane.localdomain np0005538510.ctlplane#012192.168.122.104 np0005538511.localdomain np0005538511#012192.168.122.104 np0005538511.ctlplane.localdomain np0005538511.ctlplane#012192.168.122.105 np0005538512.localdomain np0005538512#012192.168.122.105 np0005538512.ctlplane.localdomain np0005538512.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:14 localhost python3[26101]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.vmihhaiutmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:44:14 localhost python3[26118]: ansible-file Invoked with path=/tmp/ansible.vmihhaiutmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:16 localhost python3[26134]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:44:18 localhost python3[26152]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:44:22 localhost python3[26201]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:44:22 localhost python3[26246]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764315861.9310894-55738-203382693845929/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:24 localhost python3[26276]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:44:24 localhost python3[26294]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:44:24 localhost systemd[1]: Stopping NTP client/server...
Nov 28 02:44:24 localhost chronyd[765]: chronyd exiting
Nov 28 02:44:24 localhost systemd[1]: chronyd.service: Deactivated successfully.
Nov 28 02:44:24 localhost systemd[1]: Stopped NTP client/server.
Nov 28 02:44:24 localhost systemd[1]: chronyd.service: Consumed 101ms CPU time, read 1.9M from disk, written 4.0K to disk.
Nov 28 02:44:24 localhost systemd[1]: Starting NTP client/server...
Nov 28 02:44:24 localhost chronyd[26301]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 02:44:24 localhost chronyd[26301]: Frequency -30.402 +/- 0.257 ppm read from /var/lib/chrony/drift
Nov 28 02:44:24 localhost chronyd[26301]: Loaded seccomp filter (level 2)
Nov 28 02:44:24 localhost systemd[1]: Started NTP client/server.
Nov 28 02:44:26 localhost python3[26350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:44:27 localhost python3[26393]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764315866.3296187-55886-87452685903154/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:44:27 localhost python3[26423]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:44:27 localhost systemd[1]: Reloading.
Nov 28 02:44:27 localhost systemd-rc-local-generator[26445]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:44:27 localhost systemd-sysv-generator[26451]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:44:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:44:27 localhost systemd[1]: Reloading.
Nov 28 02:44:27 localhost systemd-rc-local-generator[26488]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:44:27 localhost systemd-sysv-generator[26492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:44:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:44:28 localhost systemd[1]: Starting chronyd online sources service...
Nov 28 02:44:28 localhost chronyc[26499]: 200 OK
Nov 28 02:44:28 localhost systemd[1]: chrony-online.service: Deactivated successfully.
Nov 28 02:44:28 localhost systemd[1]: Finished chronyd online sources service.
Nov 28 02:44:28 localhost python3[26516]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:44:28 localhost chronyd[26301]: System clock was stepped by 0.000000 seconds
Nov 28 02:44:29 localhost chronyd[26301]: Selected source 23.133.168.247 (pool.ntp.org)
Nov 28 02:44:29 localhost python3[26533]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:44:39 localhost python3[26550]: ansible-timezone Invoked with name=UTC hwclock=None
Nov 28 02:44:39 localhost systemd[1]: Starting Time & Date Service...
Nov 28 02:44:39 localhost systemd[1]: Started Time & Date Service.
Nov 28 02:44:39 localhost systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 28 02:44:41 localhost python3[26572]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:44:41 localhost chronyd[26301]: chronyd exiting
Nov 28 02:44:41 localhost systemd[1]: Stopping NTP client/server...
Nov 28 02:44:41 localhost systemd[1]: chronyd.service: Deactivated successfully.
Nov 28 02:44:41 localhost systemd[1]: Stopped NTP client/server.
Nov 28 02:44:41 localhost systemd[1]: Starting NTP client/server...
Nov 28 02:44:41 localhost chronyd[26579]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Nov 28 02:44:41 localhost chronyd[26579]: Frequency -30.402 +/- 0.257 ppm read from /var/lib/chrony/drift
Nov 28 02:44:41 localhost chronyd[26579]: Loaded seccomp filter (level 2)
Nov 28 02:44:41 localhost systemd[1]: Started NTP client/server.
Nov 28 02:44:46 localhost chronyd[26579]: Selected source 174.138.193.90 (pool.ntp.org)
Nov 28 02:45:09 localhost systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 28 02:45:56 localhost sshd[26776]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:50 localhost sshd[26779]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:50 localhost systemd[1]: Created slice User Slice of UID 1002.
Nov 28 02:46:50 localhost systemd[1]: Starting User Runtime Directory /run/user/1002...
Nov 28 02:46:50 localhost systemd-logind[763]: New session 14 of user ceph-admin.
Nov 28 02:46:50 localhost systemd[1]: Finished User Runtime Directory /run/user/1002.
Nov 28 02:46:50 localhost systemd[1]: Starting User Manager for UID 1002...
Nov 28 02:46:50 localhost systemd[26783]: Queued start job for default target Main User Target.
Nov 28 02:46:50 localhost systemd[26783]: Created slice User Application Slice.
Nov 28 02:46:50 localhost systemd[26783]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 28 02:46:50 localhost systemd[26783]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 02:46:50 localhost systemd[26783]: Reached target Paths.
Nov 28 02:46:50 localhost systemd[26783]: Reached target Timers.
Nov 28 02:46:50 localhost systemd[26783]: Starting D-Bus User Message Bus Socket...
Nov 28 02:46:50 localhost systemd[26783]: Starting Create User's Volatile Files and Directories...
Nov 28 02:46:50 localhost systemd[26783]: Finished Create User's Volatile Files and Directories.
Nov 28 02:46:50 localhost systemd[26783]: Listening on D-Bus User Message Bus Socket.
Nov 28 02:46:50 localhost systemd[26783]: Reached target Sockets.
Nov 28 02:46:50 localhost systemd[26783]: Reached target Basic System.
Nov 28 02:46:50 localhost systemd[26783]: Reached target Main User Target.
Nov 28 02:46:50 localhost systemd[26783]: Startup finished in 93ms.
Nov 28 02:46:50 localhost systemd[1]: Started User Manager for UID 1002.
Nov 28 02:46:50 localhost systemd[1]: Started Session 14 of User ceph-admin.
Nov 28 02:46:50 localhost sshd[26798]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:50 localhost systemd-logind[763]: New session 16 of user ceph-admin.
Nov 28 02:46:50 localhost systemd[1]: Started Session 16 of User ceph-admin.
Nov 28 02:46:50 localhost sshd[26818]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:50 localhost systemd-logind[763]: New session 17 of user ceph-admin.
Nov 28 02:46:50 localhost systemd[1]: Started Session 17 of User ceph-admin.
Nov 28 02:46:51 localhost sshd[26837]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:51 localhost systemd-logind[763]: New session 18 of user ceph-admin.
Nov 28 02:46:51 localhost systemd[1]: Started Session 18 of User ceph-admin.
Nov 28 02:46:51 localhost sshd[26856]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:51 localhost systemd-logind[763]: New session 19 of user ceph-admin.
Nov 28 02:46:51 localhost systemd[1]: Started Session 19 of User ceph-admin.
Nov 28 02:46:51 localhost sshd[26875]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:52 localhost systemd-logind[763]: New session 20 of user ceph-admin.
Nov 28 02:46:52 localhost systemd[1]: Started Session 20 of User ceph-admin.
Nov 28 02:46:52 localhost sshd[26894]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:52 localhost systemd-logind[763]: New session 21 of user ceph-admin.
Nov 28 02:46:52 localhost systemd[1]: Started Session 21 of User ceph-admin.
Nov 28 02:46:52 localhost sshd[26913]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:52 localhost systemd-logind[763]: New session 22 of user ceph-admin.
Nov 28 02:46:52 localhost systemd[1]: Started Session 22 of User ceph-admin.
Nov 28 02:46:53 localhost sshd[26932]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:53 localhost systemd-logind[763]: New session 23 of user ceph-admin.
Nov 28 02:46:53 localhost systemd[1]: Started Session 23 of User ceph-admin.
Nov 28 02:46:53 localhost sshd[26951]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:53 localhost systemd-logind[763]: New session 24 of user ceph-admin.
Nov 28 02:46:53 localhost systemd[1]: Started Session 24 of User ceph-admin.
Nov 28 02:46:54 localhost sshd[26968]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:54 localhost systemd-logind[763]: New session 25 of user ceph-admin.
Nov 28 02:46:54 localhost systemd[1]: Started Session 25 of User ceph-admin.
Nov 28 02:46:54 localhost sshd[26987]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:46:54 localhost systemd-logind[763]: New session 26 of user ceph-admin.
Nov 28 02:46:54 localhost systemd[1]: Started Session 26 of User ceph-admin.
Nov 28 02:46:54 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:10 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:10 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:11 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:11 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:11 localhost systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 27204 (sysctl)
Nov 28 02:47:11 localhost systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 28 02:47:11 localhost systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 28 02:47:12 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:12 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:16 localhost kernel: VFS: idmapped mount is not enabled.
Nov 28 02:47:39 localhost podman[27341]:
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:39.163887306 +0000 UTC m=+26.099083256 container create 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:13.105405246 +0000 UTC m=+0.040601256 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:47:39 localhost systemd[1]: Created slice Slice /machine.
Nov 28 02:47:39 localhost systemd[1]: Started libpod-conmon-7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668.scope.
Nov 28 02:47:39 localhost systemd[1]: Started libcrun container.
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:39.293334426 +0000 UTC m=+26.228530406 container init 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main)
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:39.309001477 +0000 UTC m=+26.244197457 container start 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, io.buildah.version=1.33.12, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:39.309253794 +0000 UTC m=+26.244449774 container attach 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Nov 28 02:47:39 localhost silly_clarke[27737]: 167 167
Nov 28 02:47:39 localhost systemd[1]: libpod-7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668.scope: Deactivated successfully.
Nov 28 02:47:39 localhost podman[27341]: 2025-11-28 07:47:39.313845826 +0000 UTC m=+26.249041816 container died 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-type=git, release=553, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, name=rhceph)
Nov 28 02:47:39 localhost podman[27742]: 2025-11-28 07:47:39.412748609 +0000 UTC m=+0.084262976 container remove 7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_clarke, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Nov 28 02:47:39 localhost systemd[1]: libpod-conmon-7b64133b18b07e5ae02e0e6d104e5f51e0ceb1064c9e6f16237231234b65b668.scope: Deactivated successfully.
Nov 28 02:47:39 localhost podman[27764]:
Nov 28 02:47:39 localhost podman[27764]: 2025-11-28 07:47:39.652199033 +0000 UTC m=+0.077880379 container create dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Nov 28 02:47:39 localhost systemd[1]: Started libpod-conmon-dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5.scope.
Nov 28 02:47:39 localhost systemd[1]: Started libcrun container.
Nov 28 02:47:39 localhost podman[27764]: 2025-11-28 07:47:39.621327536 +0000 UTC m=+0.047008902 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:47:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51fef301fe477176060efa587fed16744e0ab56033ce957d65a5da993a654c51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 28 02:47:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51fef301fe477176060efa587fed16744e0ab56033ce957d65a5da993a654c51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 28 02:47:39 localhost podman[27764]: 2025-11-28 07:47:39.754370527 +0000 UTC m=+0.180051843 container init dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, architecture=x86_64, io.buildah.version=1.33.12, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main)
Nov 28 02:47:39 localhost podman[27764]: 2025-11-28 07:47:39.767916443 +0000 UTC m=+0.193597789 container start dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 28 02:47:39 localhost podman[27764]: 2025-11-28 07:47:39.768321135 +0000 UTC m=+0.194002491 container attach dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, RELEASE=main, architecture=x86_64, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7)
Nov 28 02:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-f530b521bad0f3c0079403ca7d8e5533435e50290bf26755d51c2e2db81ad6a3-merged.mount: Deactivated successfully.
Nov 28 02:47:40 localhost nervous_edison[27779]: [
Nov 28 02:47:40 localhost nervous_edison[27779]: {
Nov 28 02:47:40 localhost nervous_edison[27779]: "available": false,
Nov 28 02:47:40 localhost nervous_edison[27779]: "ceph_device": false,
Nov 28 02:47:40 localhost nervous_edison[27779]: "device_id": "QEMU_DVD-ROM_QM00001",
Nov 28 02:47:40 localhost nervous_edison[27779]: "lsm_data": {},
Nov 28 02:47:40 localhost nervous_edison[27779]: "lvs": [],
Nov 28 02:47:40 localhost nervous_edison[27779]: "path": "/dev/sr0",
Nov 28 02:47:40 localhost nervous_edison[27779]: "rejected_reasons": [
Nov 28 02:47:40 localhost nervous_edison[27779]: "Has a FileSystem",
Nov 28 02:47:40 localhost nervous_edison[27779]: "Insufficient space (<5GB)"
Nov 28 02:47:40 localhost nervous_edison[27779]: ],
Nov 28 02:47:40 localhost nervous_edison[27779]: "sys_api": {
Nov 28 02:47:40 localhost nervous_edison[27779]: "actuators": null,
Nov 28 02:47:40 localhost nervous_edison[27779]: "device_nodes": "sr0",
Nov 28 02:47:40 localhost nervous_edison[27779]: "human_readable_size": "482.00 KB",
Nov 28 02:47:40 localhost nervous_edison[27779]: "id_bus": "ata",
Nov 28 02:47:40 localhost nervous_edison[27779]: "model": "QEMU DVD-ROM",
Nov 28 02:47:40 localhost nervous_edison[27779]: "nr_requests": "2",
Nov 28 02:47:40 localhost nervous_edison[27779]: "partitions": {},
Nov 28 02:47:40 localhost nervous_edison[27779]: "path": "/dev/sr0",
Nov 28 02:47:40 localhost nervous_edison[27779]: "removable": "1",
Nov 28 02:47:40 localhost nervous_edison[27779]: "rev": "2.5+",
Nov 28 02:47:40 localhost nervous_edison[27779]: "ro": "0",
Nov 28 02:47:40 localhost nervous_edison[27779]: "rotational": "1",
Nov 28 02:47:40 localhost nervous_edison[27779]: "sas_address": "",
Nov 28 02:47:40 localhost nervous_edison[27779]: "sas_device_handle": "",
Nov 28 02:47:40 localhost nervous_edison[27779]: "scheduler_mode": "mq-deadline",
Nov 28 02:47:40 localhost nervous_edison[27779]: "sectors": 0,
Nov 28 02:47:40 localhost nervous_edison[27779]: "sectorsize": "2048",
Nov 28 02:47:40 localhost nervous_edison[27779]: "size": 493568.0,
Nov 28 02:47:40 localhost nervous_edison[27779]: "support_discard": "0",
Nov 28 02:47:40 localhost nervous_edison[27779]: "type": "disk",
Nov 28 02:47:40 localhost nervous_edison[27779]: "vendor": "QEMU"
Nov 28 02:47:40 localhost nervous_edison[27779]: }
Nov 28 02:47:40 localhost nervous_edison[27779]: }
Nov 28 02:47:40 localhost nervous_edison[27779]: ]
Nov 28 02:47:40 localhost systemd[1]: libpod-dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5.scope: Deactivated successfully.
Nov 28 02:47:40 localhost podman[27764]: 2025-11-28 07:47:40.641713434 +0000 UTC m=+1.067394750 container died dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, distribution-scope=public, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 02:47:40 localhost systemd[1]: tmp-crun.t6f1SJ.mount: Deactivated successfully.
Nov 28 02:47:40 localhost systemd[1]: var-lib-containers-storage-overlay-51fef301fe477176060efa587fed16744e0ab56033ce957d65a5da993a654c51-merged.mount: Deactivated successfully.
Nov 28 02:47:40 localhost podman[29165]: 2025-11-28 07:47:40.740433021 +0000 UTC m=+0.087492294 container remove dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_edison, vcs-type=git, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, release=553, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 28 02:47:40 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:40 localhost systemd[1]: libpod-conmon-dd21b17ad1a756543f87576b7de2d56398d9a3db17b107af621117ad0d0875e5.scope: Deactivated successfully.
Nov 28 02:47:41 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:47:41 localhost systemd[1]: systemd-coredump.socket: Deactivated successfully.
Nov 28 02:47:41 localhost systemd[1]: Closed Process Core Dump Socket.
Nov 28 02:47:41 localhost systemd[1]: Stopping Process Core Dump Socket...
Nov 28 02:47:41 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 28 02:47:41 localhost systemd[1]: Reloading.
Nov 28 02:47:41 localhost systemd-sysv-generator[29251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:47:41 localhost systemd-rc-local-generator[29247]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:47:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:47:41 localhost systemd[1]: Reloading.
Nov 28 02:47:41 localhost systemd-sysv-generator[29291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:47:41 localhost systemd-rc-local-generator[29287]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:47:41 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:48:10 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:48:10 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:48:10 localhost podman[29389]:
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.473086117 +0000 UTC m=+0.079644212 container create 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main)
Nov 28 02:48:10 localhost systemd[1]: Started libpod-conmon-9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372.scope.
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.441960235 +0000 UTC m=+0.048518360 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:10 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.561759596 +0000 UTC m=+0.168317691 container init 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, GIT_CLEAN=True, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.57205674 +0000 UTC m=+0.178614835 container start 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, version=7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.572445437 +0000 UTC m=+0.179003582 container attach 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 02:48:10 localhost keen_elbakyan[29402]: 167 167
Nov 28 02:48:10 localhost systemd[1]: libpod-9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372.scope: Deactivated successfully.
Nov 28 02:48:10 localhost podman[29389]: 2025-11-28 07:48:10.578571447 +0000 UTC m=+0.185129542 container died 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 28 02:48:11 localhost systemd[1]: var-lib-containers-storage-overlay-5d53415e905b49cffeb8993230a20691ddd8960a0df1ff6da5cf9753165f609c-merged.mount: Deactivated successfully.
Nov 28 02:48:11 localhost podman[29407]: 2025-11-28 07:48:11.833994498 +0000 UTC m=+1.243548659 container remove 9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_elbakyan, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, ceph=True, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=)
Nov 28 02:48:11 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 28 02:48:11 localhost systemd[1]: libpod-conmon-9d56a61ce61c3bc4880e2ce0743d47f131b229cb331de2af35d8700265884372.scope: Deactivated successfully.
Nov 28 02:48:11 localhost systemd[1]: Reloading.
Nov 28 02:48:12 localhost systemd-rc-local-generator[29449]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:48:12 localhost systemd-sysv-generator[29452]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:12 localhost systemd[1]: Reloading. Nov 28 02:48:12 localhost systemd-sysv-generator[29487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:12 localhost systemd-rc-local-generator[29481]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:12 localhost systemd[1]: Reached target All Ceph clusters and services. Nov 28 02:48:12 localhost systemd[1]: Reloading. Nov 28 02:48:12 localhost systemd-sysv-generator[29522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:12 localhost systemd-rc-local-generator[29518]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:12 localhost systemd[1]: Reached target Ceph cluster 2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 02:48:12 localhost systemd[1]: Reloading. Nov 28 02:48:12 localhost systemd-sysv-generator[29565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:12 localhost systemd-rc-local-generator[29559]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:12 localhost systemd[1]: Reloading. Nov 28 02:48:12 localhost systemd-rc-local-generator[29604]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:13 localhost systemd-sysv-generator[29608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:13 localhost systemd[1]: Created slice Slice /system/ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 02:48:13 localhost systemd[1]: Reached target System Time Set. Nov 28 02:48:13 localhost systemd[1]: Reached target System Time Synchronized. Nov 28 02:48:13 localhost systemd[1]: Starting Ceph crash.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... Nov 28 02:48:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. Nov 28 02:48:13 localhost systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully. 
Nov 28 02:48:13 localhost podman[29665]: Nov 28 02:48:13 localhost podman[29665]: 2025-11-28 07:48:13.495529981 +0000 UTC m=+0.076723123 container create 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 02:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ce2bad17e13efb10b638a9b81cc8ffa6a43ae6257dd0423ab5beb4a935d7f3/merged/etc/ceph/ceph.client.crash.np0005538515.keyring supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ce2bad17e13efb10b638a9b81cc8ffa6a43ae6257dd0423ab5beb4a935d7f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:13 localhost podman[29665]: 2025-11-28 07:48:13.465623773 +0000 UTC m=+0.046816945 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:13 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3ce2bad17e13efb10b638a9b81cc8ffa6a43ae6257dd0423ab5beb4a935d7f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:13 localhost podman[29665]: 2025-11-28 07:48:13.579310314 +0000 UTC m=+0.160503486 container init 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, release=553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, version=7) Nov 28 02:48:13 localhost podman[29665]: 2025-11-28 07:48:13.589920162 +0000 UTC m=+0.171113324 container start 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph 
Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.buildah.version=1.33.12, GIT_BRANCH=main, release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 02:48:13 localhost bash[29665]: 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d Nov 28 02:48:13 localhost systemd[1]: Started Ceph crash.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1. 
Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: INFO:ceph-crash:pinging cluster to exercise our key Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.762+0000 7f11df21e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.762+0000 7f11df21e640 -1 AuthRegistry(0x7f11d80680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.763+0000 7f11df21e640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.763+0000 7f11df21e640 -1 AuthRegistry(0x7f11df21d000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.774+0000 7f11dcf93640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.776+0000 7f11dd794640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.777+0000 7f11d7fff640 -1 monclient(hunting): 
handle_auth_bad_method server allowed_methods [2] but i only support [1] Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: 2025-11-28T07:48:13.777+0000 7f11df21e640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: [errno 13] RADOS permission denied (error connecting to the cluster) Nov 28 02:48:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515[29679]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s Nov 28 02:48:17 localhost podman[29765]: Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.408389404 +0000 UTC m=+0.081165419 container create c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_BRANCH=main, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph) Nov 28 02:48:17 localhost systemd[1]: Started 
libpod-conmon-c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657.scope. Nov 28 02:48:17 localhost systemd[1]: Started libcrun container. Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.371793061 +0000 UTC m=+0.044569066 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.480059394 +0000 UTC m=+0.152835339 container init c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, architecture=x86_64, io.buildah.version=1.33.12, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.49177677 +0000 UTC m=+0.164552735 container start c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., distribution-scope=public, release=553, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-09-24T08:57:55) Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.492042842 +0000 UTC m=+0.164818807 container attach c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) Nov 28 02:48:17 localhost keen_allen[29780]: 167 167 Nov 28 02:48:17 localhost systemd[1]: libpod-c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657.scope: Deactivated successfully. Nov 28 02:48:17 localhost podman[29765]: 2025-11-28 07:48:17.497403508 +0000 UTC m=+0.170179513 container died c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, release=553, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 02:48:17 localhost systemd[1]: var-lib-containers-storage-overlay-2780e6983e60978dd49c412a9a7135caf1c4d430e106a13fd88da61daa6257f0-merged.mount: Deactivated successfully. 
Nov 28 02:48:17 localhost podman[29785]: 2025-11-28 07:48:17.584222995 +0000 UTC m=+0.073705581 container remove c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_allen, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.component=rhceph-container, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 02:48:17 localhost systemd[1]: libpod-conmon-c6b228b50906280576b074d98e9317e4d670f297b58a2386edf39d3386190657.scope: Deactivated successfully. 
Nov 28 02:48:17 localhost podman[29804]: Nov 28 02:48:17 localhost podman[29804]: 2025-11-28 07:48:17.790720137 +0000 UTC m=+0.072787850 container create 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, architecture=x86_64, ceph=True, vcs-type=git, release=553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=) Nov 28 02:48:17 localhost systemd[1]: Started libpod-conmon-7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4.scope. Nov 28 02:48:17 localhost systemd[1]: Started libcrun container. 
Nov 28 02:48:17 localhost podman[29804]: 2025-11-28 07:48:17.762705642 +0000 UTC m=+0.044773365 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:17 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:17 localhost podman[29804]: 2025-11-28 07:48:17.921783395 +0000 UTC m=+0.203851108 container init 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=) Nov 28 02:48:17 localhost podman[29804]: 2025-11-28 07:48:17.93233036 +0000 UTC m=+0.214398083 container start 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True) Nov 28 02:48:17 localhost podman[29804]: 2025-11-28 07:48:17.93256391 +0000 UTC 
m=+0.214631623 container attach 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, io.buildah.version=1.33.12, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 02:48:18 localhost cranky_turing[29819]: --> passed data devices: 0 physical, 2 LVM Nov 28 02:48:18 localhost cranky_turing[29819]: --> relative data size: 1.0 Nov 28 02:48:18 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 28 02:48:18 localhost cranky_turing[29819]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9 Nov 28 02:48:19 localhost lvm[29873]: PV /dev/loop3 online, VG ceph_vg0 is complete. 
Nov 28 02:48:19 localhost lvm[29873]: VG ceph_vg0 finished Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1 Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0 Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap Nov 28 02:48:19 localhost cranky_turing[29819]: stderr: got monmap epoch 3 Nov 28 02:48:19 localhost cranky_turing[29819]: --> Creating keyring file for osd.1 Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/ Nov 28 02:48:19 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9 --setuser ceph --setgroup ceph Nov 28 02:48:22 localhost cranky_turing[29819]: stderr: 2025-11-28T07:48:19.678+0000 7fc2cdc8ba80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3] Nov 28 02:48:22 localhost 
cranky_turing[29819]: stderr: 2025-11-28T07:48:19.679+0000 7fc2cdc8ba80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid Nov 28 02:48:22 localhost cranky_turing[29819]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0 Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:22 localhost cranky_turing[29819]: --> ceph-volume lvm activate successful for osd ID: 1 Nov 28 02:48:22 localhost cranky_turing[29819]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0 Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-authtool --gen-print-key Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f4f9cdb9-a7e9-468b-968c-003e9ca341ca Nov 28 02:48:22 localhost lvm[30810]: PV /dev/loop4 online, VG ceph_vg1 is complete. 
Nov 28 02:48:22 localhost lvm[30810]: VG ceph_vg1 finished
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:22 localhost cranky_turing[29819]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Nov 28 02:48:23 localhost cranky_turing[29819]: stderr: got monmap epoch 3
Nov 28 02:48:23 localhost cranky_turing[29819]: --> Creating keyring file for osd.4
Nov 28 02:48:23 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Nov 28 02:48:23 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Nov 28 02:48:23 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid f4f9cdb9-a7e9-468b-968c-003e9ca341ca --setuser ceph --setgroup ceph
Nov 28 02:48:26 localhost cranky_turing[29819]: stderr: 2025-11-28T07:48:23.539+0000 7f86f3786a80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 28 02:48:26 localhost cranky_turing[29819]: stderr: 2025-11-28T07:48:23.539+0000 7f86f3786a80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Nov 28 02:48:26 localhost cranky_turing[29819]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 28 02:48:26 localhost cranky_turing[29819]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Nov 28 02:48:26 localhost cranky_turing[29819]: --> ceph-volume lvm activate successful for osd ID: 4
Nov 28 02:48:26 localhost cranky_turing[29819]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 28 02:48:26 localhost systemd[1]: libpod-7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4.scope: Deactivated successfully.
Nov 28 02:48:26 localhost systemd[1]: libpod-7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4.scope: Consumed 3.855s CPU time.
Nov 28 02:48:26 localhost podman[31714]: 2025-11-28 07:48:26.274300235 +0000 UTC m=+0.062849072 container died 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-type=git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7)
Nov 28 02:48:26 localhost systemd[1]: var-lib-containers-storage-overlay-455d9d65b846604f148f74071d448755b10b434220828e4798aec310d34864f7-merged.mount: Deactivated successfully.
Nov 28 02:48:26 localhost podman[31714]: 2025-11-28 07:48:26.316255725 +0000 UTC m=+0.104804532 container remove 7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_turing, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Nov 28 02:48:26 localhost systemd[1]: libpod-conmon-7bdc0e75f65c183b634305c5c46ce63fea2326490d73604fba351170fe28fcd4.scope: Deactivated successfully.
Nov 28 02:48:27 localhost podman[31799]:
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.112327896 +0000 UTC m=+0.094865293 container create 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, vendor=Red Hat, Inc.)
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.046837639 +0000 UTC m=+0.029375066 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:27 localhost systemd[1]: Started libpod-conmon-50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe.scope.
Nov 28 02:48:27 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.190076344 +0000 UTC m=+0.172613751 container init 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, release=553, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.199843824 +0000 UTC m=+0.182381231 container start 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, io.openshift.expose-services=, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.200069854 +0000 UTC m=+0.182607251 container attach 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, CEPH_POINT_RELEASE=, release=553, distribution-scope=public, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.)
Nov 28 02:48:27 localhost pedantic_ride[31814]: 167 167
Nov 28 02:48:27 localhost systemd[1]: libpod-50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe.scope: Deactivated successfully.
Nov 28 02:48:27 localhost podman[31799]: 2025-11-28 07:48:27.205252462 +0000 UTC m=+0.187789869 container died 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, version=7, io.openshift.expose-services=)
Nov 28 02:48:27 localhost systemd[1]: var-lib-containers-storage-overlay-f9bba7074b839a20091fc69eda4f3c450e3b267591c00dcc10cb073b32735acd-merged.mount: Deactivated successfully.
Nov 28 02:48:27 localhost podman[31819]: 2025-11-28 07:48:27.305351225 +0000 UTC m=+0.087780151 container remove 50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_ride, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.component=rhceph-container, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Nov 28 02:48:27 localhost systemd[1]: libpod-conmon-50e39805a9f4a6cf794a341dcc923da550d5b3684e3ca526749397e693cf14fe.scope: Deactivated successfully.
Nov 28 02:48:27 localhost podman[31841]:
Nov 28 02:48:27 localhost podman[31841]: 2025-11-28 07:48:27.53710167 +0000 UTC m=+0.078988032 container create 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, vcs-type=git, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, distribution-scope=public, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 02:48:27 localhost systemd[1]: Started libpod-conmon-2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b.scope.
Nov 28 02:48:27 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:27 localhost podman[31841]: 2025-11-28 07:48:27.505807791 +0000 UTC m=+0.047694123 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71513df3195fd258b0cc3f7b6fb67ebd5c345c99d01ff0b2c312ca316cbe5712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71513df3195fd258b0cc3f7b6fb67ebd5c345c99d01ff0b2c312ca316cbe5712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71513df3195fd258b0cc3f7b6fb67ebd5c345c99d01ff0b2c312ca316cbe5712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:27 localhost podman[31841]: 2025-11-28 07:48:27.644737336 +0000 UTC m=+0.186623628 container init 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True)
Nov 28 02:48:27 localhost podman[31841]: 2025-11-28 07:48:27.65595209 +0000 UTC m=+0.197838432 container start 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, version=7, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.component=rhceph-container)
Nov 28 02:48:27 localhost podman[31841]: 2025-11-28 07:48:27.656254403 +0000 UTC m=+0.198140705 container attach 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, io.buildah.version=1.33.12, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main)
Nov 28 02:48:27 localhost recursing_jepsen[31856]: {
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "1": [
Nov 28 02:48:27 localhost recursing_jepsen[31856]: {
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "devices": [
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "/dev/loop3"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: ],
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_name": "ceph_lv0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_size": "7511998464",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=3749M0-boz5-Yk5R-nRdT-Yezs-CSPm-tBY4Hg,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2c5417c9-00eb-57d5-a565-ddecbc7995c1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_uuid": "3749M0-boz5-Yk5R-nRdT-Yezs-CSPm-tBY4Hg",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "name": "ceph_lv0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "path": "/dev/ceph_vg0/ceph_lv0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "tags": {
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.block_uuid": "3749M0-boz5-Yk5R-nRdT-Yezs-CSPm-tBY4Hg",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cephx_lockbox_secret": "",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cluster_fsid": "2c5417c9-00eb-57d5-a565-ddecbc7995c1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cluster_name": "ceph",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.crush_device_class": "",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.encrypted": "0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osd_fsid": "b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osd_id": "1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osdspec_affinity": "default_drive_group",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.type": "block",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.vdo": "0"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: },
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "type": "block",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "vg_name": "ceph_vg0"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: }
Nov 28 02:48:27 localhost recursing_jepsen[31856]: ],
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "4": [
Nov 28 02:48:27 localhost recursing_jepsen[31856]: {
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "devices": [
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "/dev/loop4"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: ],
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_name": "ceph_lv1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_size": "7511998464",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=bsBjq9-gtPd-H3kW-i3Gz-qY9o-NJ4N-eJqMew,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=2c5417c9-00eb-57d5-a565-ddecbc7995c1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=f4f9cdb9-a7e9-468b-968c-003e9ca341ca,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "lv_uuid": "bsBjq9-gtPd-H3kW-i3Gz-qY9o-NJ4N-eJqMew",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "name": "ceph_lv1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "path": "/dev/ceph_vg1/ceph_lv1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "tags": {
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.block_uuid": "bsBjq9-gtPd-H3kW-i3Gz-qY9o-NJ4N-eJqMew",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cephx_lockbox_secret": "",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cluster_fsid": "2c5417c9-00eb-57d5-a565-ddecbc7995c1",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.cluster_name": "ceph",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.crush_device_class": "",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.encrypted": "0",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osd_fsid": "f4f9cdb9-a7e9-468b-968c-003e9ca341ca",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osd_id": "4",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.osdspec_affinity": "default_drive_group",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.type": "block",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "ceph.vdo": "0"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: },
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "type": "block",
Nov 28 02:48:27 localhost recursing_jepsen[31856]: "vg_name": "ceph_vg1"
Nov 28 02:48:27 localhost recursing_jepsen[31856]: }
Nov 28 02:48:27 localhost recursing_jepsen[31856]: ]
Nov 28 02:48:27 localhost recursing_jepsen[31856]: }
Nov 28 02:48:28 localhost podman[31841]: 2025-11-28 07:48:28.032674846 +0000 UTC m=+0.574561178 container died 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 28 02:48:28 localhost systemd[1]: libpod-2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b.scope: Deactivated successfully.
Nov 28 02:48:28 localhost podman[31865]: 2025-11-28 07:48:28.129037094 +0000 UTC m=+0.083867548 container remove 2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_jepsen, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.openshift.expose-services=, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7)
Nov 28 02:48:28 localhost systemd[1]: libpod-conmon-2d4fc26160d6c9818d2904060878812ca68e37be5114c21d4032ec109785cc3b.scope: Deactivated successfully.
Nov 28 02:48:28 localhost systemd[1]: var-lib-containers-storage-overlay-71513df3195fd258b0cc3f7b6fb67ebd5c345c99d01ff0b2c312ca316cbe5712-merged.mount: Deactivated successfully.
Nov 28 02:48:28 localhost podman[31951]:
Nov 28 02:48:28 localhost podman[31951]: 2025-11-28 07:48:28.95264703 +0000 UTC m=+0.070178314 container create cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.33.12, vcs-type=git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 02:48:28 localhost systemd[1]: Started libpod-conmon-cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2.scope.
Nov 28 02:48:28 localhost systemd[1]: tmp-crun.L2DiVa.mount: Deactivated successfully.
Nov 28 02:48:29 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:29 localhost podman[31951]: 2025-11-28 07:48:28.921728427 +0000 UTC m=+0.039259711 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:29 localhost podman[31951]: 2025-11-28 07:48:29.033780457 +0000 UTC m=+0.151311751 container init cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, release=553, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Nov 28 02:48:29 localhost intelligent_burnell[31966]: 167 167
Nov 28 02:48:29 localhost podman[31951]: 2025-11-28 07:48:29.046376961 +0000 UTC m=+0.163908245 container start cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Nov 28 02:48:29 localhost podman[31951]: 2025-11-28 07:48:29.049140313 +0000 UTC m=+0.166671637 container attach cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.openshift.expose-services=, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7)
Nov 28 02:48:29 localhost systemd[1]: libpod-cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2.scope: Deactivated successfully.
Nov 28 02:48:29 localhost podman[31951]: 2025-11-28 07:48:29.05450831 +0000 UTC m=+0.172039674 container died cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, distribution-scope=public, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.33.12, name=rhceph)
Nov 28 02:48:29 localhost podman[31971]: 2025-11-28 07:48:29.137192555 +0000 UTC m=+0.073293812 container remove cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_burnell, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-type=git, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, release=553, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 28 02:48:29 localhost systemd[1]: libpod-conmon-cd84d20f0f178406a752f72ee48881f9871fba58ac6896853be57396ee7944b2.scope: Deactivated successfully.
Nov 28 02:48:29 localhost systemd[1]: var-lib-containers-storage-overlay-3fbcda35e0867fad2a4d13583fa5a7f0f3690a680d1b01bacceab37549aa5cf0-merged.mount: Deactivated successfully.
Nov 28 02:48:29 localhost podman[31998]: Nov 28 02:48:29 localhost podman[31998]: 2025-11-28 07:48:29.474878981 +0000 UTC m=+0.070462977 container create 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=553, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 02:48:29 localhost systemd[1]: Started libpod-conmon-65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3.scope. Nov 28 02:48:29 localhost systemd[1]: Started libcrun container. 
Nov 28 02:48:29 localhost podman[31998]: 2025-11-28 07:48:29.448263028 +0000 UTC m=+0.043847034 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:29 localhost podman[31998]: 2025-11-28 07:48:29.606973583 +0000 UTC m=+0.202557559 container init 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red 
Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, release=553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55) Nov 28 02:48:29 localhost podman[31998]: 2025-11-28 07:48:29.618087433 +0000 UTC m=+0.213671439 container start 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=7, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True) Nov 28 02:48:29 localhost 
podman[31998]: 2025-11-28 07:48:29.618425368 +0000 UTC m=+0.214009334 container attach 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , release=553, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 02:48:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test[32014]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Nov 28 02:48:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test[32014]: [--no-systemd] [--no-tmpfs] Nov 28 02:48:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test[32014]: ceph-volume activate: error: unrecognized arguments: --bad-option Nov 28 02:48:29 localhost systemd[1]: libpod-65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3.scope: Deactivated successfully. 
Nov 28 02:48:29 localhost podman[31998]: 2025-11-28 07:48:29.868514463 +0000 UTC m=+0.464098479 container died 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True) Nov 28 02:48:29 localhost podman[32019]: 2025-11-28 07:48:29.966335605 +0000 UTC m=+0.084886643 container remove 65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate-test, ceph=True, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, 
io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public) Nov 28 02:48:29 localhost systemd-journald[618]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Nov 28 02:48:29 localhost systemd-journald[618]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 02:48:29 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 02:48:29 localhost systemd[1]: libpod-conmon-65c73b9721994000d168180c60470cdca21dbbcae5fa11640fd254f1724a2fc3.scope: Deactivated successfully. Nov 28 02:48:30 localhost systemd[1]: Reloading. Nov 28 02:48:30 localhost systemd-rc-local-generator[32076]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:30 localhost systemd-sysv-generator[32081]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 02:48:30 localhost systemd[1]: var-lib-containers-storage-overlay-9c5a49cef492c3a27374e6497a362c48df357f95ca21b4005bed80a217740eb9-merged.mount: Deactivated successfully. Nov 28 02:48:30 localhost systemd[1]: tmp-crun.GUfD5n.mount: Deactivated successfully. Nov 28 02:48:30 localhost systemd[1]: Reloading. Nov 28 02:48:30 localhost systemd-sysv-generator[32123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:30 localhost systemd-rc-local-generator[32120]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:30 localhost systemd[1]: Starting Ceph osd.1 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... 
Nov 28 02:48:31 localhost podman[32184]: Nov 28 02:48:31 localhost podman[32184]: 2025-11-28 07:48:31.121004585 +0000 UTC m=+0.065763421 container create a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, release=553, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, RELEASE=main, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.openshift.expose-services=, ceph=True, architecture=x86_64) Nov 28 02:48:31 localhost systemd[1]: Started libcrun container. 
Nov 28 02:48:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:31 localhost podman[32184]: 2025-11-28 07:48:31.096804887 +0000 UTC m=+0.041563683 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:31 localhost podman[32184]: 2025-11-28 07:48:31.25224722 +0000 UTC m=+0.197006046 container init a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_CLEAN=True, 
io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Nov 28 02:48:31 localhost podman[32184]: 2025-11-28 07:48:31.261717617 +0000 UTC m=+0.206476433 container start a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, ceph=True, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 02:48:31 
localhost podman[32184]: 2025-11-28 07:48:31.262098794 +0000 UTC m=+0.206857790 container attach a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc.) 
Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0 Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0 Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0 Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:31 localhost bash[32184]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1 Nov 28 02:48:31 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate[32198]: --> ceph-volume raw activate successful for osd ID: 1 Nov 28 02:48:31 localhost bash[32184]: --> ceph-volume raw 
activate successful for osd ID: 1 Nov 28 02:48:31 localhost systemd[1]: libpod-a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7.scope: Deactivated successfully. Nov 28 02:48:31 localhost podman[32184]: 2025-11-28 07:48:31.992766383 +0000 UTC m=+0.937525269 container died a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True) Nov 28 02:48:32 localhost systemd[1]: tmp-crun.7iDOwr.mount: Deactivated successfully. Nov 28 02:48:32 localhost systemd[1]: var-lib-containers-storage-overlay-0680a283c51576e3d8382da9b68b17d4695f7c95f0855e7c1eaaee75775f0b2b-merged.mount: Deactivated successfully. 
Nov 28 02:48:32 localhost podman[32312]: 2025-11-28 07:48:32.084239055 +0000 UTC m=+0.080689158 container remove a85a25c20a9bb47c4374b03b9bde22be8316131b9ebacee9600413c6598ff7b7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1-activate, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Nov 28 02:48:32 localhost podman[32374]: Nov 28 02:48:32 localhost podman[32374]: 2025-11-28 07:48:32.402222352 +0000 UTC m=+0.072113660 container create d6143f5c4d0edf8e88527410f707a68007ec2434660a8ed1dd820cec36e66cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, 
distribution-scope=public, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-type=git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=) Nov 28 02:48:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259010bf69d409df7f88bdd32b0016120bbdb0cbdd301173dd40dd864993b89a/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259010bf69d409df7f88bdd32b0016120bbdb0cbdd301173dd40dd864993b89a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:32 localhost podman[32374]: 2025-11-28 07:48:32.373550818 +0000 UTC m=+0.043442136 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259010bf69d409df7f88bdd32b0016120bbdb0cbdd301173dd40dd864993b89a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259010bf69d409df7f88bdd32b0016120bbdb0cbdd301173dd40dd864993b89a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/259010bf69d409df7f88bdd32b0016120bbdb0cbdd301173dd40dd864993b89a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 
(0x7fffffff)
Nov 28 02:48:32 localhost podman[32374]: 2025-11-28 07:48:32.508356301 +0000 UTC m=+0.178247619 container init d6143f5c4d0edf8e88527410f707a68007ec2434660a8ed1dd820cec36e66cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7)
Nov 28 02:48:32 localhost podman[32374]: 2025-11-28 07:48:32.514907009 +0000 UTC m=+0.184798317 container start d6143f5c4d0edf8e88527410f707a68007ec2434660a8ed1dd820cec36e66cc3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.expose-services=, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph)
Nov 28 02:48:32 localhost bash[32374]: d6143f5c4d0edf8e88527410f707a68007ec2434660a8ed1dd820cec36e66cc3
Nov 28 02:48:32 localhost systemd[1]: Started Ceph osd.1 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1.
Nov 28 02:48:32 localhost ceph-osd[32393]: set uid:gid to 167:167 (ceph:ceph)
Nov 28 02:48:32 localhost ceph-osd[32393]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Nov 28 02:48:32 localhost ceph-osd[32393]: pidfile_write: ignore empty --pid-file
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a452e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a452e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a452e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:32 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:32 localhost ceph-osd[32393]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) close
Nov 28 02:48:32 localhost ceph-osd[32393]: bdev(0x55ab8a452e00 /var/lib/ceph/osd/ceph-1/block) close
Nov 28 02:48:33 localhost ceph-osd[32393]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 28 02:48:33 localhost ceph-osd[32393]: load: jerasure load: lrc
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) close
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) close
Nov 28 02:48:33 localhost podman[32486]:
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.397755987 +0000 UTC m=+0.075601383 container create fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, vcs-type=git)
Nov 28 02:48:33 localhost systemd[1]: Started libpod-conmon-fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d.scope.
Nov 28 02:48:33 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.36583898 +0000 UTC m=+0.043684396 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.47177076 +0000 UTC m=+0.149616156 container init fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public)
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.481147963 +0000 UTC m=+0.158993359 container start fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc.)
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.481420284 +0000 UTC m=+0.159265680 container attach fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, release=553, RELEASE=main, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 02:48:33 localhost systemd[1]: libpod-fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d.scope: Deactivated successfully.
Nov 28 02:48:33 localhost magical_sammet[32505]: 167 167
Nov 28 02:48:33 localhost podman[32486]: 2025-11-28 07:48:33.485345988 +0000 UTC m=+0.163191384 container died fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, name=rhceph, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 28 02:48:33 localhost podman[32510]: 2025-11-28 07:48:33.583426742 +0000 UTC m=+0.084921456 container remove fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_sammet, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, ceph=True)
Nov 28 02:48:33 localhost systemd[1]: libpod-conmon-fce79ddbfb4ca7305887e3b30a10f282f004f484f956cd29730228c506a2678d.scope: Deactivated successfully.
Nov 28 02:48:33 localhost ceph-osd[32393]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 28 02:48:33 localhost ceph-osd[32393]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs mount
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs mount shared_bdev_used = 0
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: RocksDB version: 7.9.2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Git sha 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DB SUMMARY
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DB Session ID: QREFVIKU9CR5RBF8LMCG
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: CURRENT file: CURRENT
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: IDENTITY file: IDENTITY
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.error_if_exists: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.create_if_missing: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_checks: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.flush_verify_memtable_count: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.env: 0x55ab8a6e7180
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.fs: LegacyFileSystem
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.info_log: 0x55ab8b3d6500
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_file_opening_threads: 16
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.statistics: (nil)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_fsync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_log_file_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_manifest_file_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.log_file_time_to_roll: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.keep_log_file_num: 1000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.recycle_log_file_num: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_fallocate: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_mmap_reads: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_mmap_writes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_direct_reads: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.create_missing_column_families: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_log_dir:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_dir: db.wal
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_cache_numshardbits: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.WAL_ttl_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.WAL_size_limit_MB: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.manifest_preallocation_size: 4194304
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.is_fd_close_on_exec: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.advise_random_on_open: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_write_buffer_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_manager: 0x55ab8a43d4a0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.access_hint_on_compaction_start: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.random_access_max_buffer_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_adaptive_mutex: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.rate_limiter: (nil)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_recovery_mode: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_thread_tracking: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_pipelined_write: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.unordered_write: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_concurrent_memtable_write: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_thread_max_yield_usec: 100
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_thread_slow_yield_usec: 3
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.row_cache: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_flush_during_recovery: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_ingest_behind: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.two_write_queues: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.manual_wal_flush: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_compression: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.atomic_flush: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.persist_stats_to_disk: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_dbid_to_manifest: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.log_readahead_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.file_checksum_gen_factory: Unknown
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.best_efforts_recovery: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_data_in_errors: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_host_id: __hostname__
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enforce_single_del_contracts: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_jobs: 4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_compactions: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_subcompactions: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_flush_during_shutdown: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.writable_file_max_buffer_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.delayed_write_rate : 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_total_wal_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_dump_period_sec: 600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_persist_period_sec: 600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_history_buffer_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_open_files: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.strict_bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_readahead_size: 2097152
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_flushes: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Compression algorithms supported:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZSTD supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kXpressCompression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kBZip2Compression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kLZ4Compression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZlibCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kSnappyCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:33 localhost
ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 
28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 
localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 
02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, 
name: p-1) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d66c0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d68e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d68e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d68e0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, 
name: P) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4ac62bd1-f8fe-41aa-94df-6dbfb775e8d7 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113682013, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113682381, "job": 1, "event": "recovery_finished"}
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 28 02:48:33 localhost ceph-osd[32393]: freelist init
Nov 28 02:48:33 localhost ceph-osd[32393]: freelist _read_cfg
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs umount
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) close
Nov 28 02:48:33 localhost podman[32733]:
Nov 28 02:48:33 localhost podman[32733]: 2025-11-28 07:48:33.933846839 +0000 UTC m=+0.073386576 container create 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main)
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 28 02:48:33 localhost ceph-osd[32393]: bdev(0x55ab8a453500 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs mount
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 28 02:48:33 localhost ceph-osd[32393]: bluefs mount shared_bdev_used = 4718592
Nov 28 02:48:33 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: RocksDB version: 7.9.2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Git sha 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DB SUMMARY
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DB Session ID: QREFVIKU9CR5RBF8LMCH
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: CURRENT file: CURRENT
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: IDENTITY file: IDENTITY
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.error_if_exists: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.create_if_missing: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_checks: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.flush_verify_memtable_count: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.env: 0x55ab8b524850
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.fs: LegacyFileSystem
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.info_log: 0x55ab8b44d040
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_file_opening_threads: 16
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.statistics: (nil)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_fsync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_log_file_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_manifest_file_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.log_file_time_to_roll: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.keep_log_file_num: 1000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.recycle_log_file_num: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_fallocate: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_mmap_reads: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_mmap_writes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_direct_reads: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.create_missing_column_families: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_log_dir:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_dir: db.wal
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_cache_numshardbits: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.WAL_ttl_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.WAL_size_limit_MB: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.manifest_preallocation_size: 4194304
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.is_fd_close_on_exec: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.advise_random_on_open: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_write_buffer_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_manager: 0x55ab8a43d5e0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.access_hint_on_compaction_start: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.random_access_max_buffer_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.use_adaptive_mutex: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.rate_limiter: (nil)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_recovery_mode: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_thread_tracking: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_pipelined_write: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.unordered_write: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_concurrent_memtable_write: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_thread_max_yield_usec: 100
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_thread_slow_yield_usec: 3
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.row_cache: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_flush_during_recovery: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_ingest_behind: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.two_write_queues: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.manual_wal_flush: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_compression: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.atomic_flush: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.persist_stats_to_disk: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_dbid_to_manifest: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.log_readahead_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.file_checksum_gen_factory: Unknown
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.best_efforts_recovery: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.allow_data_in_errors: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.db_host_id: __hostname__
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enforce_single_del_contracts: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_jobs: 4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_compactions: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_subcompactions: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.avoid_flush_during_shutdown: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.writable_file_max_buffer_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.delayed_write_rate : 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_total_wal_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_dump_period_sec: 600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_persist_period_sec: 600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.stats_history_buffer_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_open_files: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.wal_bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.strict_bytes_per_sync: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_readahead_size: 2097152
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_background_flushes: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Compression algorithms supported:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZSTD supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kXpressCompression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kBZip2Compression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kLZ4Compression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kZlibCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: #011kSnappyCompression supported: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false
Nov 28
02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, 
name: m-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b44c460)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d6c40)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost systemd[1]: Started libpod-conmon-6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460.scope. 
Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d6c40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: 
Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.merge_operator: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:33 
localhost ceph-osd[32393]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ab8b3d6c40)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x55ab8a42b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression: LZ4 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.num_levels: 7 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.enabled: false Nov 28 
02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:33 localhost 
ceph-osd[32393]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:33 localhost ceph-osd[32393]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, 
name: L) Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number 
is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 4ac62bd1-f8fe-41aa-94df-6dbfb775e8d7 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113970174, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113975468, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316113, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ac62bd1-f8fe-41aa-94df-6dbfb775e8d7", "db_session_id": "QREFVIKU9CR5RBF8LMCH", 
"orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113980301, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316113, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ac62bd1-f8fe-41aa-94df-6dbfb775e8d7", "db_session_id": "QREFVIKU9CR5RBF8LMCH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113986618, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316113, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "4ac62bd1-f8fe-41aa-94df-6dbfb775e8d7", "db_session_id": "QREFVIKU9CR5RBF8LMCH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316113990625, "job": 1, "event": "recovery_finished"} Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Nov 28 02:48:34 localhost systemd[1]: Started libcrun container. 
Nov 28 02:48:34 localhost podman[32733]: 2025-11-28 07:48:33.906861989 +0000 UTC m=+0.046401736 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55ab8a452e00 Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: DB pointer 0x55ab8a485a00 Nov 28 02:48:34 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 28 02:48:34 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4 Nov 28 02:48:34 localhost ceph-osd[32393]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 02:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 
MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown 
for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 
0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012 Nov 28 02:48:34 localhost ceph-osd[32393]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Nov 28 02:48:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:34 localhost ceph-osd[32393]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Nov 28 02:48:34 localhost ceph-osd[32393]: _get_class not permitted to load lua Nov 28 02:48:34 localhost ceph-osd[32393]: _get_class not permitted to load sdk Nov 28 02:48:34 localhost ceph-osd[32393]: _get_class not permitted to load test_remote_reads Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 load_pgs Nov 28 
02:48:34 localhost ceph-osd[32393]: osd.1 0 load_pgs opened 0 pgs Nov 28 02:48:34 localhost ceph-osd[32393]: osd.1 0 log_to_monitors true Nov 28 02:48:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1[32389]: 2025-11-28T07:48:34.026+0000 7fc674863a80 -1 osd.1 0 log_to_monitors true Nov 28 02:48:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:34 localhost systemd[1]: var-lib-containers-storage-overlay-7570757b21089630af044dff18f5069d9af04d63f9ed03a218b84dd47e0eefd5-merged.mount: Deactivated successfully. 
Nov 28 02:48:34 localhost podman[32733]: 2025-11-28 07:48:34.072573193 +0000 UTC m=+0.212112900 container init 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph) Nov 28 02:48:34 localhost podman[32733]: 2025-11-28 07:48:34.08019753 +0000 UTC m=+0.219737267 container start 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , ceph=True, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph) Nov 28 02:48:34 localhost podman[32733]: 2025-11-28 07:48:34.080677601 +0000 UTC m=+0.220217398 container attach 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-type=git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, ceph=True, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, release=553, GIT_BRANCH=main) 
Nov 28 02:48:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test[32930]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID] Nov 28 02:48:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test[32930]: [--no-systemd] [--no-tmpfs] Nov 28 02:48:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test[32930]: ceph-volume activate: error: unrecognized arguments: --bad-option Nov 28 02:48:34 localhost systemd[1]: libpod-6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460.scope: Deactivated successfully. Nov 28 02:48:34 localhost podman[32733]: 2025-11-28 07:48:34.297085931 +0000 UTC m=+0.436625708 container died 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 02:48:34 localhost systemd[1]: 
var-lib-containers-storage-overlay-de9401ce940ded6b98d412edb363f866bebab3d8561836e2537140b8a1e691e9-merged.mount: Deactivated successfully. Nov 28 02:48:34 localhost podman[32968]: 2025-11-28 07:48:34.395412675 +0000 UTC m=+0.085222298 container remove 6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate-test, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 02:48:34 localhost systemd[1]: libpod-conmon-6d4d7da991c4f1b78c12e591d1ce5dfbe35a960096156af77a75d9f2508bb460.scope: Deactivated successfully. Nov 28 02:48:34 localhost systemd[1]: Reloading. Nov 28 02:48:34 localhost systemd-sysv-generator[33025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 02:48:34 localhost systemd-rc-local-generator[33022]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:34 localhost systemd[1]: Reloading. Nov 28 02:48:35 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : purged_snaps scrub starts Nov 28 02:48:35 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : purged_snaps scrub ok Nov 28 02:48:35 localhost systemd-rc-local-generator[33060]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:48:35 localhost systemd-sysv-generator[33066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:48:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:48:35 localhost systemd[1]: Starting Ceph osd.4 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... 
Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 done with init, starting boot process Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 start_boot Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1 Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0 Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3 Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10 Nov 28 02:48:35 localhost ceph-osd[32393]: osd.1 0 bench count 12288000 bsize 4 KiB Nov 28 02:48:35 localhost podman[33126]: Nov 28 02:48:35 localhost podman[33126]: 2025-11-28 07:48:35.629972046 +0000 UTC m=+0.107730550 container create 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux , version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 02:48:35 localhost podman[33126]: 2025-11-28 07:48:35.573740467 +0000 UTC m=+0.051498991 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:35 localhost systemd[1]: tmp-crun.Zm04Qz.mount: Deactivated successfully. Nov 28 02:48:35 localhost systemd[1]: Started libcrun container. Nov 28 02:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:35 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:35 localhost podman[33126]: 2025-11-28 07:48:35.78749451 +0000 UTC m=+0.265253004 container init 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_BRANCH=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12) Nov 28 02:48:35 localhost podman[33126]: 2025-11-28 07:48:35.797050211 +0000 UTC m=+0.274808715 container start 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph 
Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=) Nov 28 02:48:35 localhost podman[33126]: 2025-11-28 07:48:35.797368465 +0000 UTC m=+0.275126999 container attach 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, architecture=x86_64, release=553, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Nov 28 02:48:36 localhost bash[33126]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Nov 28 
02:48:36 localhost bash[33126]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1 Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Nov 28 02:48:36 localhost bash[33126]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1 Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 28 02:48:36 localhost bash[33126]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1 Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Nov 28 02:48:36 localhost bash[33126]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Nov 28 02:48:36 localhost bash[33126]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4 Nov 28 02:48:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate[33140]: --> ceph-volume raw activate successful for osd ID: 4 Nov 28 02:48:36 localhost bash[33126]: --> ceph-volume raw activate successful for osd ID: 4 Nov 28 02:48:36 localhost systemd[1]: libpod-04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de.scope: Deactivated successfully. 
Nov 28 02:48:36 localhost podman[33126]: 2025-11-28 07:48:36.525870978 +0000 UTC m=+1.003629502 container died 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, vcs-type=git, distribution-scope=public) Nov 28 02:48:36 localhost systemd[1]: var-lib-containers-storage-overlay-868ed51b53f87dacb1240adb098301a374c5a6dad4454e524b8f4efa738f7998-merged.mount: Deactivated successfully. 
Nov 28 02:48:36 localhost podman[33257]: 2025-11-28 07:48:36.674662497 +0000 UTC m=+0.136925527 container remove 04f84c1e81dbc9dbf850783ae50fa0ecf301892495e75819575c778b15aa64de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4-activate, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, architecture=x86_64, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, RELEASE=main) Nov 28 02:48:36 localhost podman[33315]: Nov 28 02:48:36 localhost podman[33315]: 2025-11-28 07:48:36.994390621 +0000 UTC m=+0.071332245 container create 606597793ddb0558c53e18a6d88c01aa5ec597833ba8a2079a2c6fce1e6d2c82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, 
com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, architecture=x86_64, build-date=2025-09-24T08:57:55) Nov 28 02:48:37 localhost systemd[1]: tmp-crun.GIElyk.mount: Deactivated successfully. Nov 28 02:48:37 localhost podman[33315]: 2025-11-28 07:48:36.96851082 +0000 UTC m=+0.045452324 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02176f38795b040d5d85ab09fc76c0bbcbbfc9c3677feaa100178a06e93de861/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02176f38795b040d5d85ab09fc76c0bbcbbfc9c3677feaa100178a06e93de861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02176f38795b040d5d85ab09fc76c0bbcbbfc9c3677feaa100178a06e93de861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02176f38795b040d5d85ab09fc76c0bbcbbfc9c3677feaa100178a06e93de861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:37 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/02176f38795b040d5d85ab09fc76c0bbcbbfc9c3677feaa100178a06e93de861/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff) Nov 28 02:48:37 localhost podman[33315]: 2025-11-28 07:48:37.154940539 +0000 UTC m=+0.231882033 container init 606597793ddb0558c53e18a6d88c01aa5ec597833ba8a2079a2c6fce1e6d2c82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.component=rhceph-container, GIT_CLEAN=True, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, name=rhceph, distribution-scope=public) Nov 28 02:48:37 localhost podman[33315]: 2025-11-28 07:48:37.186806543 +0000 UTC m=+0.263748027 container start 606597793ddb0558c53e18a6d88c01aa5ec597833ba8a2079a2c6fce1e6d2c82 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, 
RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55) Nov 28 02:48:37 localhost bash[33315]: 606597793ddb0558c53e18a6d88c01aa5ec597833ba8a2079a2c6fce1e6d2c82 Nov 28 02:48:37 localhost systemd[1]: Started Ceph osd.4 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 02:48:37 localhost ceph-osd[33334]: set uid:gid to 167:167 (ceph:ceph) Nov 28 02:48:37 localhost ceph-osd[33334]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2 Nov 28 02:48:37 localhost ceph-osd[33334]: pidfile_write: ignore empty --pid-file Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 28 02:48:37 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06 Nov 28 02:48:37 localhost 
ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:37 localhost ceph-osd[33334]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) close
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 28 02:48:37 localhost ceph-osd[33334]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Nov 28 02:48:37 localhost ceph-osd[33334]: load: jerasure load: lrc
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:37 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:37 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:37 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) close
Nov 28 02:48:37 localhost podman[33425]:
Nov 28 02:48:37 localhost podman[33425]: 2025-11-28 07:48:37.957068697 +0000 UTC m=+0.066193759 container create 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, name=rhceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 02:48:38 localhost systemd[1]: Started libpod-conmon-49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d.scope.
Nov 28 02:48:38 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:38 localhost podman[33425]: 2025-11-28 07:48:37.92537963 +0000 UTC m=+0.034504722 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:38 localhost ceph-osd[33334]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 28 02:48:38 localhost podman[33425]: 2025-11-28 07:48:38.048485297 +0000 UTC m=+0.157610369 container init 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, version=7, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a32e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs mount
Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs mount shared_bdev_used = 0
Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Nov 28 02:48:38 localhost romantic_hoover[33440]: 167 167
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: RocksDB version: 7.9.2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Git sha 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Compile date 2025-09-23 00:00:00
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DB SUMMARY
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DB Session ID: X4M0XD0YLGEXMBVZCYFT
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: CURRENT file: CURRENT
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: IDENTITY file: IDENTITY
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: SST files in db.slow dir, Total Num: 0, files:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ;
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.error_if_exists: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.create_if_missing: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_checks: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.flush_verify_memtable_count: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.track_and_verify_wals_in_manifest: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.env: 0x562bf8cc6cb0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.fs: LegacyFileSystem
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.info_log: 0x562bf99ba380
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_file_opening_threads: 16
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.statistics: (nil)
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_fsync: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_log_file_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_manifest_file_size: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.log_file_time_to_roll: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.keep_log_file_num: 1000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.recycle_log_file_num: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_fallocate: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_mmap_reads: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_mmap_writes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_direct_reads: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.create_missing_column_families: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.db_log_dir:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_dir: db.wal
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_cache_numshardbits: 6
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.WAL_ttl_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.WAL_size_limit_MB: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.manifest_preallocation_size: 4194304
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.is_fd_close_on_exec: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.advise_random_on_open: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.db_write_buffer_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_manager: 0x562bf8a1c140
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.access_hint_on_compaction_start: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.random_access_max_buffer_size: 1048576
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_adaptive_mutex: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.rate_limiter: (nil)
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_recovery_mode: 2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_thread_tracking: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_pipelined_write: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.unordered_write: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_concurrent_memtable_write: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_write_thread_adaptive_yield: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_thread_max_yield_usec: 100
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_thread_slow_yield_usec: 3
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.row_cache: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_filter: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.avoid_flush_during_recovery: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_ingest_behind: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.two_write_queues: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.manual_wal_flush: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_compression: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.atomic_flush: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.avoid_unnecessary_blocking_io: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.persist_stats_to_disk: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_dbid_to_manifest: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.log_readahead_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.file_checksum_gen_factory: Unknown
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.best_efforts_recovery: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bgerror_resume_count: 2147483647
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bgerror_resume_retry_interval: 1000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_data_in_errors: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.db_host_id: __hostname__
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enforce_single_del_contracts: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_jobs: 4
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_compactions: -1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_subcompactions: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.avoid_flush_during_shutdown: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.writable_file_max_buffer_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.delayed_write_rate : 16777216
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_total_wal_size: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_dump_period_sec: 600
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_persist_period_sec: 600
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_history_buffer_size: 1048576
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_open_files: -1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bytes_per_sync: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_bytes_per_sync: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.strict_bytes_per_sync: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_readahead_size: 2097152
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_flushes: -1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Compression algorithms supported:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZSTD supported: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kXpressCompression supported: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kBZip2Compression supported: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kLZ4Compression supported: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZlibCompression supported: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kSnappyCompression supported: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:38 localhost systemd[1]: libpod-49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d.scope: Deactivated successfully.
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7
Nov 28 02:48:38 localhost
ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, 
name: m-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba540)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a850#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost podman[33425]: 2025-11-28 07:48:38.085973919 +0000 UTC m=+0.195098981 container start 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, RELEASE=main, GIT_CLEAN=True, name=rhceph, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, ceph=True) Nov 28 02:48:38 localhost 
podman[33425]: 2025-11-28 07:48:38.087019295 +0000 UTC m=+0.196144407 container attach 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, name=rhceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 02:48:38 localhost podman[33425]: 2025-11-28 07:48:38.091552405 +0000 UTC m=+0.200677517 container died 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, build-date=2025-09-24T08:57:55, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, vcs-type=git) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 
1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 
localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 
localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf99ba760)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 
use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false 
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
[db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0e9f3910-9ea8-45ae-a4b3-9f14ef476182 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118079629, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118079874, "job": 1, "event": "recovery_finished"} Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options 
compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000 Nov 28 02:48:38 localhost ceph-osd[33334]: freelist init Nov 28 02:48:38 localhost ceph-osd[33334]: freelist _read_cfg Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs umount Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) close Nov 28 02:48:38 localhost podman[33504]: 2025-11-28 07:48:38.197391361 +0000 UTC m=+0.115764275 container remove 49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_hoover, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, ceph=True, name=rhceph, release=553, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=) Nov 28 02:48:38 localhost systemd[1]: libpod-conmon-49d3db070436a7bd695838e4be1de6c44cec26f166951c485b33da130554834d.scope: Deactivated successfully. 
Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument Nov 28 02:48:38 localhost ceph-osd[33334]: bdev(0x562bf8a33180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs mount Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000 Nov 28 02:48:38 localhost ceph-osd[33334]: bluefs mount shared_bdev_used = 4718592 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: RocksDB version: 7.9.2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Git sha 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DB SUMMARY Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DB Session ID: X4M0XD0YLGEXMBVZCYFS Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: CURRENT file: CURRENT Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: IDENTITY file: IDENTITY Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: MANIFEST file: MANIFEST-000032 size: 1007 Bytes Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: SST files in db.slow dir, Total Num: 0, files: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.error_if_exists: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.create_if_missing: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.env: 0x562bf8cc7c00 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.fs: LegacyFileSystem Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.info_log: 0x562bf8acb520 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_file_opening_threads: 16 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.statistics: (nil) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_fsync: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_log_file_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.log_file_time_to_roll: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.keep_log_file_num: 1000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.recycle_log_file_num: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_fallocate: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_mmap_reads: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_mmap_writes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_direct_reads: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.create_missing_column_families: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.db_log_dir: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_dir: db.wal Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_cache_numshardbits: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.advise_random_on_open: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.db_write_buffer_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_manager: 0x562bf8a1d540 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.use_adaptive_mutex: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.rate_limiter: (nil) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_recovery_mode: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_thread_tracking: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_pipelined_write: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.unordered_write: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.row_cache: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_ingest_behind: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.two_write_queues: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.manual_wal_flush: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_compression: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.atomic_flush: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.persist_stats_to_disk: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.log_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.best_efforts_recovery: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.allow_data_in_errors: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.db_host_id: __hostname__ Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enforce_single_del_contracts: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_jobs: 4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_compactions: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_subcompactions: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.writable_file_max_buffer_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.delayed_write_rate : 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_total_wal_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_dump_period_sec: 600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_persist_period_sec: 600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_open_files: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bytes_per_sync: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_readahead_size: 2097152 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_background_flushes: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Compression algorithms supported: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZSTD supported: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kXpressCompression supported: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kBZip2Compression supported: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kLZ4Compression supported: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kZlibCompression supported: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: #011kSnappyCompression supported: 1 Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DMutex implementation: pthread_mutex_t Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: .T:int64_array.b:bitwise_xor Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 
block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 
localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 
cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 
28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 
strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 
localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 
02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, 
name: p-1) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 
initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acb3e0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 
checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0b610#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 483183820#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: 
rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acabc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acabc0)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 
read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]: Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.merge_operator: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_filter_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.sst_partitioner_factory: None Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562bf8acabc0)#012 cache_index_and_filter_blocks: 1#012 
cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x562bf8a0a2d0#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.write_buffer_size: 16777216 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number: 64 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression: LZ4 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression: Disabled Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.num_levels: 7 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_write_buffer_number_to_merge: 6 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.level: 32767 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.enabled: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_file_num_compaction_trigger: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.level0_slowdown_writes_trigger: 20 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_base: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier: 8.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.arena_block_size: 1048576 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: 
Options.disable_auto_compactions: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_support: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.bloom_locality: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.max_successive_merges: 0 Nov 28 02:48:38 localhost 
ceph-osd[33334]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.force_consistency_checks: 1 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.ttl: 2592000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_files: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.min_blob_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_size: 268435456 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, 
name: P) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/column_family.cc:635] #011(skipping printing options) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0e9f3910-9ea8-45ae-a4b3-9f14ef476182 
Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118363303, "job": 1, "event": "recovery_started", "wal_files": [31]} Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118388906, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316118, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0e9f3910-9ea8-45ae-a4b3-9f14ef476182", "db_session_id": "X4M0XD0YLGEXMBVZCYFS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118417048, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, 
"file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316118, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0e9f3910-9ea8-45ae-a4b3-9f14ef476182", "db_session_id": "X4M0XD0YLGEXMBVZCYFS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118426342, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, 
"num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764316118, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0e9f3910-9ea8-45ae-a4b3-9f14ef476182", "db_session_id": "X4M0XD0YLGEXMBVZCYFS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}} Nov 28 02:48:38 localhost podman[33660]: Nov 28 02:48:38 localhost podman[33660]: 2025-11-28 07:48:38.464706324 +0000 UTC m=+0.114714088 container create 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, RELEASE=main, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True) Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764316118466362, "job": 1, "event": "recovery_finished"} Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/version_set.cc:5047] Creating manifest 40 Nov 28 02:48:38 localhost podman[33660]: 2025-11-28 07:48:38.405078086 +0000 UTC m=+0.055085880 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562bf8ad0380 Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: DB pointer 0x562bf9911a00 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4 Nov 28 02:48:38 localhost ceph-osd[33334]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 02:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 
writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, 
interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 
total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 460.80 MB usag Nov 28 02:48:38 localhost ceph-osd[33334]: /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs Nov 28 02:48:38 localhost ceph-osd[33334]: /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello Nov 28 02:48:38 localhost systemd[1]: Started libpod-conmon-607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470.scope. 
Nov 28 02:48:38 localhost ceph-osd[33334]: _get_class not permitted to load lua Nov 28 02:48:38 localhost ceph-osd[33334]: _get_class not permitted to load sdk Nov 28 02:48:38 localhost ceph-osd[33334]: _get_class not permitted to load test_remote_reads Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 load_pgs Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 load_pgs opened 0 pgs Nov 28 02:48:38 localhost ceph-osd[33334]: osd.4 0 log_to_monitors true Nov 28 02:48:38 localhost systemd[1]: Started libcrun container. 
Nov 28 02:48:38 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4[33330]: 2025-11-28T07:48:38.555+0000 7f2b3a71ea80 -1 osd.4 0 log_to_monitors true
Nov 28 02:48:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87df368dbed6e848a6f32f85972509b1e344bec02ca59072eaf174c54e6e606/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87df368dbed6e848a6f32f85972509b1e344bec02ca59072eaf174c54e6e606/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f87df368dbed6e848a6f32f85972509b1e344bec02ca59072eaf174c54e6e606/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:38 localhost podman[33660]: 2025-11-28 07:48:38.627343463 +0000 UTC m=+0.277351227 container init 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., release=553, io.buildah.version=1.33.12, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 02:48:38 localhost podman[33660]: 2025-11-28 07:48:38.63816182 +0000 UTC m=+0.288169554 container start 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55)
Nov 28 02:48:38 localhost podman[33660]: 2025-11-28 07:48:38.638358249 +0000 UTC m=+0.288366063 container attach 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main)
Nov 28 02:48:38 localhost systemd[1]: var-lib-containers-storage-overlay-769e70b48a1c2c822b927d650341de595d4acfa241ab5da7498791095dcf7b4d-merged.mount: Deactivated successfully.
Nov 28 02:48:39 localhost elated_lovelace[33863]: {
Nov 28 02:48:39 localhost elated_lovelace[33863]:     "b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9": {
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "ceph_fsid": "2c5417c9-00eb-57d5-a565-ddecbc7995c1",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "osd_id": 1,
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "osd_uuid": "b9cdd064-f06d-4e2b-b6e3-0368d5f01fb9",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "type": "bluestore"
Nov 28 02:48:39 localhost elated_lovelace[33863]:     },
Nov 28 02:48:39 localhost elated_lovelace[33863]:     "f4f9cdb9-a7e9-468b-968c-003e9ca341ca": {
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "ceph_fsid": "2c5417c9-00eb-57d5-a565-ddecbc7995c1",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "osd_id": 4,
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "osd_uuid": "f4f9cdb9-a7e9-468b-968c-003e9ca341ca",
Nov 28 02:48:39 localhost elated_lovelace[33863]:         "type": "bluestore"
Nov 28 02:48:39 localhost elated_lovelace[33863]:     }
Nov 28 02:48:39 localhost elated_lovelace[33863]: }
Nov 28 02:48:39 localhost systemd[1]: libpod-607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470.scope: Deactivated successfully.
Nov 28 02:48:39 localhost podman[33660]: 2025-11-28 07:48:39.293089971 +0000 UTC m=+0.943097745 container died 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Nov 28 02:48:39 localhost systemd[1]: var-lib-containers-storage-overlay-f87df368dbed6e848a6f32f85972509b1e344bec02ca59072eaf174c54e6e606-merged.mount: Deactivated successfully.
Nov 28 02:48:39 localhost podman[33927]: 2025-11-28 07:48:39.404715261 +0000 UTC m=+0.101273025 container remove 607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elated_lovelace, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Nov 28 02:48:39 localhost systemd[1]: libpod-conmon-607c4c1d86fe68184ea33fb17f285556f6d07da725a4c8e2b2da08f7dce07470.scope: Deactivated successfully.
Nov 28 02:48:39 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 28 02:48:39 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.779 iops: 5063.440 elapsed_sec: 0.592
Nov 28 02:48:39 localhost ceph-osd[32393]: log_channel(cluster) log [WRN] : OSD bench result of 5063.439684 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 0 waiting for initial osdmap
Nov 28 02:48:39 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1[32389]: 2025-11-28T07:48:39.970+0000 7fc6707e2640 -1 osd.1 0 waiting for initial osdmap
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 set_numa_affinity not setting numa affinity
Nov 28 02:48:39 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-1[32389]: 2025-11-28T07:48:39.989+0000 7fc66be0c640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 28 02:48:39 localhost ceph-osd[32393]: osd.1 12 _collect_metadata loop3: no unique device id for loop3: fallback method has no model nor serial
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 done with init, starting boot process
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 start_boot
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 28 02:48:40 localhost ceph-osd[33334]: osd.4 0 bench count 12288000 bsize 4 KiB
Nov 28 02:48:40 localhost ceph-osd[32393]: osd.1 13 state: booting -> active
Nov 28 02:48:41 localhost podman[34052]: 2025-11-28 07:48:41.690900559 +0000 UTC m=+0.108232412 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, version=7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=)
Nov 28 02:48:41 localhost podman[34052]: 2025-11-28 07:48:41.820508513 +0000 UTC m=+0.237840406 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, GIT_CLEAN=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Nov 28 02:48:42 localhost ceph-osd[32393]: osd.1 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 28 02:48:42 localhost ceph-osd[32393]: osd.1 15 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 28 02:48:42 localhost ceph-osd[32393]: osd.1 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 28 02:48:42 localhost ceph-osd[32393]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 02:48:43 localhost ceph-osd[32393]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 undersized+peered mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 02:48:43 localhost podman[34244]:
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.857351459 +0000 UTC m=+0.094432664 container create 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55)
Nov 28 02:48:43 localhost systemd[1]: Started libpod-conmon-3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4.scope.
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.807328334 +0000 UTC m=+0.044409599 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:43 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.95149392 +0000 UTC m=+0.188575145 container init 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_BRANCH=main, build-date=2025-09-24T08:57:55)
Nov 28 02:48:43 localhost funny_chaum[34259]: 167 167
Nov 28 02:48:43 localhost systemd[1]: libpod-3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4.scope: Deactivated successfully.
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.970322949 +0000 UTC m=+0.207404184 container start 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553)
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.970644064 +0000 UTC m=+0.207725309 container attach 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main)
Nov 28 02:48:43 localhost podman[34244]: 2025-11-28 07:48:43.973368894 +0000 UTC m=+0.210450159 container died 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, release=553, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, version=7, com.redhat.component=rhceph-container, distribution-scope=public)
Nov 28 02:48:44 localhost systemd[1]: var-lib-containers-storage-overlay-b95aefb47c92c64a11fce9bc4483f344fa2851454bba834b9598b864e3e1f9db-merged.mount: Deactivated successfully.
Nov 28 02:48:44 localhost podman[34264]: 2025-11-28 07:48:44.07605092 +0000 UTC m=+0.100686310 container remove 3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_chaum, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, maintainer=Guillaume Abrioux , vcs-type=git, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True)
Nov 28 02:48:44 localhost systemd[1]: libpod-conmon-3f8be99dd0c6d201be45dd3a9c2722e314812e74164aeb070f20b0a0247122d4.scope: Deactivated successfully.
Nov 28 02:48:44 localhost podman[34284]:
Nov 28 02:48:44 localhost podman[34284]: 2025-11-28 07:48:44.27295069 +0000 UTC m=+0.070705728 container create f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, release=553, version=7, ceph=True, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=)
Nov 28 02:48:44 localhost systemd[1]: Started libpod-conmon-f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2.scope.
Nov 28 02:48:44 localhost systemd[1]: Started libcrun container.
Nov 28 02:48:44 localhost podman[34284]: 2025-11-28 07:48:44.240760121 +0000 UTC m=+0.038515169 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 02:48:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa3ec5d5d1eafd6ab6d4478cd62a168d8e40c6257e4979f5963c5b04408f26f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa3ec5d5d1eafd6ab6d4478cd62a168d8e40c6257e4979f5963c5b04408f26f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afa3ec5d5d1eafd6ab6d4478cd62a168d8e40c6257e4979f5963c5b04408f26f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 28 02:48:44 localhost podman[34284]: 2025-11-28 07:48:44.37595034 +0000 UTC m=+0.173705398 container init f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main)
Nov 28 02:48:44 localhost podman[34284]: 2025-11-28 07:48:44.395877148 +0000 UTC m=+0.193632206 container start f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, version=7, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64)
Nov 28 02:48:44 localhost podman[34284]: 2025-11-28 07:48:44.396308558 +0000 UTC m=+0.194063796 container attach f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64)
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.197 iops: 4914.424 elapsed_sec: 0.610
Nov 28 02:48:44 localhost ceph-osd[33334]: log_channel(cluster) log [WRN] : OSD bench result of 4914.423996 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 0 waiting for initial osdmap
Nov 28 02:48:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4[33330]: 2025-11-28T07:48:44.489+0000 7f2b3669d640 -1 osd.4 0 waiting for initial osdmap
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 check_osdmap_features require_osd_release unknown -> reef
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 set_numa_affinity not setting numa affinity
Nov 28 02:48:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-osd-4[33330]: 2025-11-28T07:48:44.510+0000 7f2b31cc7640 -1 osd.4 17 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 28 02:48:44 localhost ceph-osd[33334]: osd.4 17 _collect_metadata loop4: no unique device id for loop4: fallback method has no model nor serial
Nov 28 02:48:45 localhost ceph-osd[32393]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=18 pruub=14.005291939s) [1,5,3] r=0 lpr=18 pi=[15,18)/0 crt=0'0 mlcod 0'0 peered pruub 25.125785828s@ mbc={}] start_peering_interval up [1] -> [1,5,3], acting [1] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 02:48:45 localhost ceph-osd[32393]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=18 pruub=14.005291939s) [1,5,3] r=0 lpr=18 pi=[15,18)/0 crt=0'0 mlcod 0'0 unknown pruub 25.125785828s@ mbc={}] state: transitioning to Primary
Nov 28 02:48:45 localhost ceph-osd[33334]: osd.4 18 state: booting -> active
Nov 28 02:48:45 localhost elastic_williamson[34299]: [
Nov 28 02:48:45 localhost elastic_williamson[34299]:     {
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "available": false,
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "ceph_device": false,
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "lsm_data": {},
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "lvs": [],
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "path": "/dev/sr0",
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "rejected_reasons": [
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "Has a FileSystem",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "Insufficient space (<5GB)"
Nov 28 02:48:45 localhost elastic_williamson[34299]:         ],
Nov 28 02:48:45 localhost elastic_williamson[34299]:         "sys_api": {
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "actuators": null,
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "device_nodes": "sr0",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "human_readable_size": "482.00 KB",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "id_bus": "ata",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "model": "QEMU DVD-ROM",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "nr_requests": "2",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "partitions": {},
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "path": "/dev/sr0",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "removable": "1",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "rev": "2.5+",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "ro": "0",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "rotational": "1",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "sas_address": "",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "sas_device_handle": "",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "scheduler_mode": "mq-deadline",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "sectors": 0,
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "sectorsize": "2048",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "size": 493568.0,
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "support_discard": "0",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "type": "disk",
Nov 28 02:48:45 localhost elastic_williamson[34299]:             "vendor": "QEMU"
Nov 28 02:48:45 localhost elastic_williamson[34299]:         }
Nov 28 02:48:45 localhost elastic_williamson[34299]:     }
Nov 28 02:48:45 localhost elastic_williamson[34299]: ]
Nov 28 02:48:45 localhost systemd[1]: libpod-f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2.scope: Deactivated successfully.
Nov 28 02:48:45 localhost systemd[1]: libpod-f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2.scope: Consumed 1.060s CPU time.
Nov 28 02:48:45 localhost podman[34284]: 2025-11-28 07:48:45.417731503 +0000 UTC m=+1.215486601 container died f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph)
Nov 28 02:48:45 localhost systemd[1]: var-lib-containers-storage-overlay-afa3ec5d5d1eafd6ab6d4478cd62a168d8e40c6257e4979f5963c5b04408f26f-merged.mount: Deactivated successfully.
Nov 28 02:48:45 localhost podman[35774]: 2025-11-28 07:48:45.525800547 +0000 UTC m=+0.095558563 container remove f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_williamson, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, distribution-scope=public, release=553, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 02:48:45 localhost systemd[1]: libpod-conmon-f5648193c0da0f8955416bbb58ab546b6b527415067bb6af48164d8787d3e4e2.scope: Deactivated successfully.
Nov 28 02:48:46 localhost ceph-osd[32393]: osd.1 pg_epoch: 19 pg[1.0( empty local-lis/les=18/19 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[15,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 02:48:53 localhost systemd[26783]: Starting Mark boot as successful...
Nov 28 02:48:53 localhost systemd[26783]: Finished Mark boot as successful.
Nov 28 02:48:55 localhost systemd[1]: tmp-crun.eVeOQd.mount: Deactivated successfully.
Nov 28 02:48:55 localhost podman[35903]: 2025-11-28 07:48:55.257211929 +0000 UTC m=+0.086794197 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc.) 
Nov 28 02:48:55 localhost podman[35903]: 2025-11-28 07:48:55.389754213 +0000 UTC m=+0.219336531 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-type=git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container)
Nov 28 02:49:57 localhost podman[36080]: 2025-11-28 07:49:57.191924451 +0000 UTC m=+0.082572441 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=553, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 28 02:49:57 localhost podman[36080]: 2025-11-28 07:49:57.294069173 +0000 UTC m=+0.184717163 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, CEPH_POINT_RELEASE=, release=553, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, ceph=True, com.redhat.component=rhceph-container)
Nov 28 02:50:01 localhost systemd-logind[763]: Session 13 logged out. Waiting for processes to exit.
Nov 28 02:50:01 localhost systemd[1]: session-13.scope: Deactivated successfully.
Nov 28 02:50:01 localhost systemd[1]: session-13.scope: Consumed 21.355s CPU time.
Nov 28 02:50:01 localhost systemd-logind[763]: Removed session 13.
Nov 28 02:51:53 localhost systemd[26783]: Created slice User Background Tasks Slice.
Nov 28 02:51:53 localhost systemd[26783]: Starting Cleanup of User's Temporary Files and Directories...
Nov 28 02:51:53 localhost systemd[26783]: Finished Cleanup of User's Temporary Files and Directories.
Nov 28 02:53:40 localhost sshd[36455]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:53:40 localhost systemd-logind[763]: New session 27 of user zuul.
Nov 28 02:53:40 localhost systemd[1]: Started Session 27 of User zuul.
Nov 28 02:53:41 localhost python3[36503]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 28 02:53:42 localhost python3[36548]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 02:53:42 localhost python3[36568]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 28 02:53:43 localhost python3[36624]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:53:43 localhost python3[36667]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764316423.0875354-66621-79331243140527/source _original_basename=tmpotbj1vh6 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:53:44 localhost python3[36697]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:53:44 localhost python3[36713]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:53:44 localhost python3[36729]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:53:45 localhost python3[36745]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnBivukZgTjr1SoC29hE3ofwUMxTaKeXh9gXvDwMJASbvK4q9943cbJ2j47GUf8sEgY38kkU/dxSMQWULl4d2oquIgZpJbJuXMU1WNxwGNSrS74OecQ3Or4VxTiDmu/HV83nIWHqfpDCra4DlrIBPPNwhBK4u0QYy87AJaML6NGEDaubbHgVCg1UpW1ho/sDoXptAehoCEaaeRz5tPHiXRnHpIXu44Sp8fRcyU9rBqdv+/lgachTcMYadsD2WBHIL+pptEDHB5TvQTDpnU58YdGFarn8uuGPP4t8H6xcqXbaJS9/oZa5Fb5Mh3vORBbR65jvlGg4PYGzCuI/xllY5+lGK7eyOleFyRqWKa2uAIaGoRBT4ZLKAssOFwCIaGfOAFFOBMkuylg4+MtbYiMJYRARPSRAufAROqhUDOo73y5lBrXh07aiWuSn8fU4mclWu+Xw382ryxW+XeHPc12d7S46TvGJaRvzsLtlyerRxGI77xOHRexq1Z/SFjOWLOwc= zuul-build-sshkey#012 regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:53:46 localhost python3[36759]: ansible-ping Invoked with data=pong
Nov 28 02:53:57 localhost sshd[36760]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 02:53:57 localhost systemd[1]: Created slice User Slice of UID 1003.
Nov 28 02:53:57 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Nov 28 02:53:57 localhost systemd-logind[763]: New session 28 of user tripleo-admin.
Nov 28 02:53:57 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Nov 28 02:53:57 localhost systemd[1]: Starting User Manager for UID 1003...
Nov 28 02:53:57 localhost systemd[36764]: Queued start job for default target Main User Target.
Nov 28 02:53:57 localhost systemd[36764]: Created slice User Application Slice.
Nov 28 02:53:57 localhost systemd[36764]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 28 02:53:57 localhost systemd[36764]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 02:53:57 localhost systemd[36764]: Reached target Paths.
Nov 28 02:53:57 localhost systemd[36764]: Reached target Timers.
Nov 28 02:53:57 localhost systemd[36764]: Starting D-Bus User Message Bus Socket...
Nov 28 02:53:57 localhost systemd[36764]: Starting Create User's Volatile Files and Directories...
Nov 28 02:53:57 localhost systemd[36764]: Listening on D-Bus User Message Bus Socket.
Nov 28 02:53:57 localhost systemd[36764]: Reached target Sockets.
Nov 28 02:53:57 localhost systemd[36764]: Finished Create User's Volatile Files and Directories.
Nov 28 02:53:57 localhost systemd[36764]: Reached target Basic System.
Nov 28 02:53:57 localhost systemd[36764]: Reached target Main User Target.
Nov 28 02:53:57 localhost systemd[36764]: Startup finished in 123ms.
Nov 28 02:53:57 localhost systemd[1]: Started User Manager for UID 1003.
Nov 28 02:53:57 localhost systemd[1]: Started Session 28 of User tripleo-admin.
Nov 28 02:53:58 localhost python3[36823]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 02:54:03 localhost python3[36873]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Nov 28 02:54:04 localhost python3[36921]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Nov 28 02:54:05 localhost python3[36984]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.pcvp7x4ttmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:54:05 localhost python3[37014]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.pcvp7x4ttmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:54:06 localhost python3[37030]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.pcvp7x4ttmphosts insertbefore=BOF block=172.17.0.106 np0005538513.localdomain np0005538513#012172.18.0.106 np0005538513.storage.localdomain np0005538513.storage#012172.20.0.106 np0005538513.storagemgmt.localdomain np0005538513.storagemgmt#012172.17.0.106 np0005538513.internalapi.localdomain np0005538513.internalapi#012172.19.0.106 np0005538513.tenant.localdomain np0005538513.tenant#012192.168.122.106 np0005538513.ctlplane.localdomain np0005538513.ctlplane#012172.17.0.107 np0005538514.localdomain np0005538514#012172.18.0.107 np0005538514.storage.localdomain np0005538514.storage#012172.20.0.107 np0005538514.storagemgmt.localdomain np0005538514.storagemgmt#012172.17.0.107 np0005538514.internalapi.localdomain np0005538514.internalapi#012172.19.0.107 np0005538514.tenant.localdomain np0005538514.tenant#012192.168.122.107 np0005538514.ctlplane.localdomain np0005538514.ctlplane#012172.17.0.108 np0005538515.localdomain np0005538515#012172.18.0.108 np0005538515.storage.localdomain np0005538515.storage#012172.20.0.108 np0005538515.storagemgmt.localdomain np0005538515.storagemgmt#012172.17.0.108 np0005538515.internalapi.localdomain np0005538515.internalapi#012172.19.0.108 np0005538515.tenant.localdomain np0005538515.tenant#012192.168.122.108 np0005538515.ctlplane.localdomain np0005538515.ctlplane#012172.17.0.103 np0005538510.localdomain np0005538510#012172.18.0.103 np0005538510.storage.localdomain np0005538510.storage#012172.20.0.103 np0005538510.storagemgmt.localdomain np0005538510.storagemgmt#012172.17.0.103 np0005538510.internalapi.localdomain np0005538510.internalapi#012172.19.0.103 np0005538510.tenant.localdomain np0005538510.tenant#012192.168.122.103 np0005538510.ctlplane.localdomain np0005538510.ctlplane#012172.17.0.104 np0005538511.localdomain np0005538511#012172.18.0.104 np0005538511.storage.localdomain np0005538511.storage#012172.20.0.104 np0005538511.storagemgmt.localdomain np0005538511.storagemgmt#012172.17.0.104 np0005538511.internalapi.localdomain np0005538511.internalapi#012172.19.0.104 np0005538511.tenant.localdomain np0005538511.tenant#012192.168.122.104 np0005538511.ctlplane.localdomain np0005538511.ctlplane#012172.17.0.105 np0005538512.localdomain np0005538512#012172.18.0.105 np0005538512.storage.localdomain np0005538512.storage#012172.20.0.105 np0005538512.storagemgmt.localdomain np0005538512.storagemgmt#012172.17.0.105 np0005538512.internalapi.localdomain np0005538512.internalapi#012172.19.0.105 np0005538512.tenant.localdomain np0005538512.tenant#012192.168.122.105 np0005538512.ctlplane.localdomain np0005538512.ctlplane#012#012192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane#012192.168.122.99 overcloud.ctlplane.localdomain#012172.18.0.197 overcloud.storage.localdomain#012172.20.0.177 overcloud.storagemgmt.localdomain#012172.17.0.128 overcloud.internalapi.localdomain#012172.21.0.169 overcloud.localdomain#012 marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:54:07 localhost python3[37046]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.pcvp7x4ttmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:54:07 localhost python3[37063]: ansible-file Invoked with path=/tmp/ansible.pcvp7x4ttmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:54:08 localhost python3[37079]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:54:09 localhost python3[37096]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:54:13 localhost python3[37116]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:54:14 localhost python3[37133]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:55:24 localhost kernel: SELinux: Converting 2699 SID table entries...
Nov 28 02:55:24 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:55:24 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:55:24 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:55:24 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:55:24 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:55:24 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:55:24 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:55:24 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=6 res=1
Nov 28 02:55:24 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 02:55:24 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 28 02:55:24 localhost systemd[1]: Reloading.
Nov 28 02:55:24 localhost systemd-sysv-generator[37996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:55:24 localhost systemd-rc-local-generator[37991]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:55:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:55:25 localhost systemd[1]: Queuing reload/restart jobs for marked units…
Nov 28 02:55:25 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 02:55:25 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 28 02:55:25 localhost systemd[1]: run-r7192d6730b4e4c4b9d51597ca633e445.service: Deactivated successfully.
Nov 28 02:55:27 localhost python3[38434]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:55:29 localhost python3[38573]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:55:30 localhost systemd[1]: Reloading.
Nov 28 02:55:30 localhost systemd-rc-local-generator[38596]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:55:30 localhost systemd-sysv-generator[38601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:55:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:55:30 localhost python3[38627]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:55:31 localhost python3[38643]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:55:32 localhost python3[38660]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 28 02:55:33 localhost python3[38678]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:55:33 localhost python3[38696]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:55:34 localhost python3[38714]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:55:34 localhost systemd[1]: Reloading Network Manager...
Nov 28 02:55:34 localhost NetworkManager[5965]: [1764316534.3587] audit: op="reload" arg="0" pid=38717 uid=0 result="success"
Nov 28 02:55:34 localhost NetworkManager[5965]: [1764316534.3595] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Nov 28 02:55:34 localhost NetworkManager[5965]: [1764316534.3595] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Nov 28 02:55:34 localhost systemd[1]: Reloaded Network Manager.
Nov 28 02:55:35 localhost python3[38733]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:55:36 localhost python3[38750]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:55:36 localhost python3[38768]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:55:37 localhost python3[38784]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:55:37 localhost python3[38800]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 28 02:55:38 localhost python3[38816]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:55:39 localhost python3[38832]: ansible-blockinfile Invoked with path=/tmp/ansible.nk45s4qy block=[192.168.122.106]*,[np0005538513.ctlplane.localdomain]*,[172.17.0.106]*,[np0005538513.internalapi.localdomain]*,[172.18.0.106]*,[np0005538513.storage.localdomain]*,[172.20.0.106]*,[np0005538513.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005538513.tenant.localdomain]*,[np0005538513.localdomain]*,[np0005538513]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCToHi/c1OL/UxMWy2v/t0tcvSlMeoKa6EPBYbcu51p2Gn2UxEPgCRLM9+84Smh2pxAR4Y/5LVm2lbZ9Gf4okHGg5GLIyqzxxqbQHyR+YRljujVEOvksUPuKCptzx9fQj2Ij2t9GPGHc5klgGPIKjx0pza8T37vdz+G9y7zuK5wWI66AeN8y/6dD2hvi1Lp94VRSvTTEo+nUOFSIgsOwqQO+ZSwTgjG1pmtESBe8nkhW0I0BQPX46v9f1PN1LXDg8cN2FSVjQ91RI0uCvTaBYJ3soFBFspgiJ113zapbQCaNwg7lK7ofS0QT5WONP3QIsDAq1gSpWuOdS2DRY4NU3WMd4m5tLbj+ubiWr39rNU/zQiEl8r38aiM0OwOfuQ9S8wxO7phpVCQrbOkYCLLijdy/xTODvP+jYohTMWX8Gh6IVeVtm6SB2Tw3lDBCjpqlclCSs905Xe+mTJ6WYTaz+Q1xgflKEeemzJ0+rt+QZbrmL7u5MUdf/l/yOLAgACNsws=#012[192.168.122.107]*,[np0005538514.ctlplane.localdomain]*,[172.17.0.107]*,[np0005538514.internalapi.localdomain]*,[172.18.0.107]*,[np0005538514.storage.localdomain]*,[172.20.0.107]*,[np0005538514.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005538514.tenant.localdomain]*,[np0005538514.localdomain]*,[np0005538514]* ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLIqwhlOevSQuHXF0nrkLOzRoSQqnWWb/cXzK4um93clqGujVOE9PUyL6ONBo/qlr4Pp+QMzSsIFwjW1T/6G+Ce2CS/TGphIUxvvB9NhBt+OJl/zDUEmjAU6bwVIx6ApqtimsXWWIap9GEtVWA5P9pcqPMyGzq1mCzwCS252Ylioij0zZxfMrxTt3RSsWrDED61vRes0ZKd8HERTLN+Lzis5t8f74zfwTesOea6CRkIHth4cUP7ua3q+KhhbhIPj+fXWN5w+qVbcTMJSYyUPsZ2ymPhR4x4db1oPk1Jg14dw1BnmAZZl3v8o4l7bUQ2Fj/PE1JbSiApxbK+V0KdZGMrG4iVbnMmzwBXPXHa6lNQGneflVd3MNEepnTnXx4hAVpaJHc8EtIREq8aPe07DW0wL9clpTKaSGU2Ma+BLXmSDPkuPh6JWLxn3iM1yybL574NnGt2MgBj6z2tiSb4NkNmaBkoG8PMHw8YUSabKBBZNiMEO2GKBpHZldSrYvOZHU=#012[192.168.122.108]*,[np0005538515.ctlplane.localdomain]*,[172.17.0.108]*,[np0005538515.internalapi.localdomain]*,[172.18.0.108]*,[np0005538515.storage.localdomain]*,[172.20.0.108]*,[np0005538515.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005538515.tenant.localdomain]*,[np0005538515.localdomain]*,[np0005538515]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbwc/gBZF5hmsFU6BSK1/DT5hduj2+3ukzoCGLU6mgpBv7BiInN7vVOqXilL+QUAWOvfKTekaQe1Vv/2jpygQnlu6MEMopmac/36IfVjgt39zxCULfSWv3Gp8tLP0ATF2LfhHBWFrGX7G3Bg3AiNfIUnQIQadBaKIByl+FfA7nJ7phwBAwJaQxvByGDeMwC2CWIUPgVqKclcw1WmldPnNmwquLlCbAeMV2hHlBfnVk8BI6fsOUcBB6a05zRpJpbrl584F+qkiQX0RpZYJQdZCoLiJStJv39lYhgiAWChUOVJsCbeNQnC9/Xgs5JhmRESgXh7Tm+8UNW9DxSHN7BS5qKYPUULdjobSp2v9pFOx30MLMsNd5r3JE07pgm5PpjuviSGEvJ8DIAPTF3kUXM43wax1q9rGV4ZfoJiLAwS9CmWWDWZDg17cnC5z+3qi+K8HUKz8LxQCHI+yEtTFzUEYyXTQfQbNvkauEHI/PwFA1iC+4/2g/0UhtjkM+FO2Czwk=#012[192.168.122.103]*,[np0005538510.ctlplane.localdomain]*,[172.17.0.103]*,[np0005538510.internalapi.localdomain]*,[172.18.0.103]*,[np0005538510.storage.localdomain]*,[172.20.0.103]*,[np0005538510.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005538510.tenant.localdomain]*,[np0005538510.localdomain]*,[np0005538510]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDAxqgPHnyGChl6yd1/HRo8ox+w8llSVhIj8iYUdDG7IquyLr4/CZguZzRkngbXi/Dq544iKS4kFL/zPKi+yuxeFs4b6fgo4vGoV8wwKNSJXx0d0hOQa9651VqB6k/trENRTgLa2fHkXgF+/g0f7HvloQfhr7qjhTBRV4l4UfJiOEpMvMxN6map/D0JuHlAZGZ5mGUoBTEMuPGEPvMWqe0kc/I8WIgsMsvijOGM2xDxsOqAYlV9a8faoyMdacWUNkeQTfPF6h+z8xdvP8qWPtrPKWHMpcGicTI6pFZ2JxOjWnMaBXs2j/CN7HFLbyOCwuAvAu9efAbxJvgtZlO++6kSlq7SHMzwv7PLP69GaQJHR+jANJ/O2BchbxL09mIkpFSzLSS0k7xXJlwqnAMciIlTaud2n5Hqnnb06WgtvD6O0nnuCLH5am7F1YDGJJgUmNbbgF1PuwzOZqQy+tA2igji/n2z87KkGZdIbrHdPU1PPIlzVGPO6aO02RhvtD+/iQM=#012[192.168.122.104]*,[np0005538511.ctlplane.localdomain]*,[172.17.0.104]*,[np0005538511.internalapi.localdomain]*,[172.18.0.104]*,[np0005538511.storage.localdomain]*,[172.20.0.104]*,[np0005538511.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005538511.tenant.localdomain]*,[np0005538511.localdomain]*,[np0005538511]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqtmgm0KAOOIJ7a8whlZPfasnwJpfcm6zVmQjiKHZZrcojE/a6oALfufKXbfWWiLjJ2VzyK9v7QPNXhIWxgAKT9J40A1lSpSAmmxMaWvy+hzzvePs0Z4Fc4bFX7V4zBGI+dAJ+eAu73z6OKNuMhxBrL46ejpRFbqjwBP3veWRiLOMbyPn+Wc+amop0p1eEzV2QHMAIC5Dwm6/tYNLixNSa/Ea0ciaY3jWii+IGhYy+wqQP+9qkoVf9bZ4Ewa+7UfXI/q4zvvic/Znb8ZpCpezLnH4ilBORLyV9r/wkkkVGY7UVgUdSoLVjzTGQAtHl2ZgA3zJ2F2ES9QcBEvrHygT4vGgtEaxQn8XFhBwhzCpPaLyXti/6d+8M36cJx+7gv1eEfDgLz3MNR+tcnFSew9N6dIN4afV0DvA/9FsWk8PTqddN4iHcZzRo0GiDJWNtB+gYVZOytTYMZm2Cyv59IthEzxaB+wTZoSdCeuEeTM0ohYspOKirIPqMPuCbGbtrJFE=#012[192.168.122.105]*,[np0005538512.ctlplane.localdomain]*,[172.17.0.105]*,[np0005538512.internalapi.localdomain]*,[172.18.0.105]*,[np0005538512.storage.localdomain]*,[172.20.0.105]*,[np0005538512.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005538512.tenant.localdomain]*,[np0005538512.localdomain]*,[np0005538512]* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCy9/gxqH+eMqafXwUuPf+1Clpw4qsugdFefisnCDhJ5U7Pc+eWMUQVMS0ErxabBJhneDOyPwXwIbv72cEAtmgfvHDlSuS3mt8LRzKqsv1dXTy4Zqb3JGVzrvxo0iczGRsn2MIDJUv/Zjq9YqVeCnDj2HOwV+qx+EFecEFXS797FxsnMmTw0A5z8yUtBuJEGAKQX96LpZc4k5ltq+Uy0rK85Kk7cGR4A+wrIChLC8wggxvA99NdPEBtne6Chb+3PcbYUcTGhGtV6FGzpgbWmuWT/gcANb+fJE5/4n87loLmBMsmvGhvQuN9kuJ20g6nwPJbPTpIbV6XALx4tbma68bL3RL+lcGlh3jf0pEXPfolrB/MRmJn5ggMLjRv50FrowQalnCEgWE0gtd9IGjmqFz3jP008bGotn9rcacbjC2AvE+5NEjp7TzXGnFcD6jW8+9AWiusCww4ULs/oWbi0GLkmhwU5EifitDYF2+r1CigAdlEjb6sa0wAQSmclWk6guM=#012 create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:55:39 localhost python3[38848]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.nk45s4qy' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:39 localhost python3[38866]: ansible-file Invoked with path=/tmp/ansible.nk45s4qy state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:55:40 localhost python3[38882]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 02:55:41 localhost 
python3[38898]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:41 localhost python3[38916]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:41 localhost python3[38935]: ansible-community.general.cloud_init_data_facts Invoked with filter=status Nov 28 02:55:44 localhost python3[39072]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:45 localhost python3[39089]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 02:55:48 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:55:48 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:55:48 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 02:55:48 localhost systemd[1]: Starting man-db-cache-update.service... 
Nov 28 02:55:48 localhost systemd[1]: Reloading. Nov 28 02:55:48 localhost systemd-rc-local-generator[39167]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:55:48 localhost systemd-sysv-generator[39170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:55:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:55:48 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 02:55:48 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 28 02:55:48 localhost systemd[1]: tuned.service: Deactivated successfully. Nov 28 02:55:48 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 28 02:55:48 localhost systemd[1]: tuned.service: Consumed 1.722s CPU time. Nov 28 02:55:48 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 28 02:55:48 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 02:55:48 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 02:55:48 localhost systemd[1]: run-r0c642f159fba4057b9d2f8270231ada6.service: Deactivated successfully. Nov 28 02:55:50 localhost systemd[1]: Started Dynamic System Tuning Daemon. Nov 28 02:55:50 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 02:55:50 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 02:55:50 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 02:55:50 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 02:55:50 localhost systemd[1]: run-r1730aa72abef4569a7698fa9dabccad7.service: Deactivated successfully. 
Nov 28 02:55:51 localhost python3[39526]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 02:55:51 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 28 02:55:51 localhost systemd[1]: tuned.service: Deactivated successfully. Nov 28 02:55:51 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 28 02:55:51 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 28 02:55:52 localhost systemd[1]: Started Dynamic System Tuning Daemon. Nov 28 02:55:53 localhost python3[39721]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:54 localhost python3[39738]: ansible-slurp Invoked with src=/etc/tuned/active_profile Nov 28 02:55:54 localhost python3[39754]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 02:55:55 localhost python3[39770]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:57 localhost python3[39790]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:55:57 localhost python3[39807]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 02:56:00 localhost python3[39823]: ansible-replace 
Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:05 localhost python3[39839]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:05 localhost python3[39887]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:06 localhost python3[39932]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316565.3866425-71269-4171597198821/source _original_basename=tmpkd232841 follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:06 localhost python3[39976]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:07 localhost python3[40071]: ansible-ansible.legacy.stat 
Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:07 localhost python3[40114]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316567.0255291-71365-161059675954738/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=f62dcfb681d1b393d0933e3027f5bdff5685b671 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:08 localhost python3[40191]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:08 localhost python3[40234]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316567.9728765-71498-216974929580161/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=526fa277b7a2f2320a39d589994ce8c8af83f91d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:09 localhost python3[40296]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:09 localhost python3[40339]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316568.8568509-71498-268617579071918/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False 
_original_basename=vip_data.j2 checksum=a223df0bad6272fbaedbfa3b3952717db2fe2201 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:10 localhost python3[40401]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:10 localhost python3[40444]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316569.7610092-71498-248799131100880/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:11 localhost python3[40506]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:11 localhost python3[40549]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316570.7028797-71498-156670614555741/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:11 localhost python3[40611]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:12 localhost python3[40654]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316571.5210068-71498-175478226132879/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=62e0064aaeb633d534c066293fb50230d01591cd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:12 localhost python3[40716]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:13 localhost python3[40759]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316572.383641-71498-94396251438356/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:13 localhost python3[40821]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:14 localhost python3[40864]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316573.29409-71498-259177846658353/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False 
_original_basename=service_configs.j2 checksum=8f5fcf4d1773fc71cd0863786080c50634c31bf2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:14 localhost python3[40926]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:14 localhost python3[40969]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316574.1672018-71498-280473941620089/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:15 localhost python3[41031]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:15 localhost python3[41074]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316575.0305808-71498-156474437425396/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:16 localhost python3[41136]: ansible-ansible.legacy.stat Invoked with 
path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:16 localhost python3[41179]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316575.9257348-71498-182361790183983/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=fbd352828bf2a24978bac89caf2b80ad6306db82 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:17 localhost python3[41209]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 02:56:17 localhost python3[41257]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:56:18 localhost python3[41300]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316577.6255393-72137-238168634886834/source _original_basename=tmp05ku064p follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:56:23 localhost python3[41330]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 02:56:23 localhost python3[41391]: 
ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:28 localhost python3[41408]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:33 localhost python3[41425]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:34 localhost python3[41448]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:38 localhost python3[41465]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:39 localhost python3[41488]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:43 localhost python3[41505]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True 
strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:48 localhost python3[41522]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:48 localhost python3[41545]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:53 localhost python3[41562]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:53 localhost systemd[36764]: Starting Mark boot as successful... Nov 28 02:56:53 localhost systemd[36764]: Finished Mark boot as successful. 
Nov 28 02:56:57 localhost python3[41580]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:56:58 localhost python3[41603]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:57:02 localhost python3[41620]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:57:07 localhost python3[41637]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")#012MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")#012echo "$INT $MTU"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:57:07 localhost python3[41660]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:57:11 localhost python3[41753]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:57:17 localhost python3[41770]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:17 localhost python3[41818]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:17 localhost python3[41836]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmps_4gqnva recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:18 localhost python3[41866]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:19 localhost python3[41914]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:19 localhost python3[41932]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:20 localhost python3[41994]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:20 localhost python3[42012]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:20 localhost python3[42074]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:21 localhost python3[42092]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:21 localhost python3[42154]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:22 localhost python3[42172]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:22 localhost python3[42234]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:22 localhost python3[42252]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:23 localhost python3[42314]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:23 localhost python3[42332]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:24 localhost python3[42394]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:24 localhost python3[42412]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:24 localhost python3[42474]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:25 localhost python3[42492]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:25 localhost python3[42554]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:25 localhost python3[42572]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:26 localhost python3[42634]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:26 localhost python3[42652]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:27 localhost python3[42714]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:27 localhost python3[42732]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:27 localhost python3[42762]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:57:28 localhost python3[42810]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:28 localhost python3[42828]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpl_ja7pqv recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:31 localhost python3[42858]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:57:36 localhost python3[42875]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:57:36 localhost python3[42893]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:57:38 localhost python3[42911]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 02:57:38 localhost systemd[1]: Reloading.
Nov 28 02:57:38 localhost systemd-rc-local-generator[42936]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 02:57:38 localhost systemd-sysv-generator[42942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 02:57:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 02:57:38 localhost systemd[1]: Starting Netfilter Tables...
Nov 28 02:57:38 localhost systemd[1]: Finished Netfilter Tables.
Nov 28 02:57:39 localhost python3[43000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:39 localhost python3[43043]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316659.2863007-75112-91271994407602/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:40 localhost python3[43073]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:40 localhost python3[43091]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:41 localhost python3[43140]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:41 localhost python3[43183]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316661.031042-75291-164256847562180/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:42 localhost python3[43245]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:42 localhost python3[43288]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316662.0064602-75353-113716215427850/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:43 localhost python3[43350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:43 localhost python3[43393]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316663.0663264-75417-217519372544593/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:44 localhost python3[43455]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:44 localhost python3[43498]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316663.9738405-75467-161828163473985/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:45 localhost python3[43560]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:57:46 localhost python3[43603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316664.8835588-75499-83727017972286/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:46 localhost python3[43633]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:47 localhost python3[43698]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/tripleo-chains.nft"#012include "/etc/nftables/tripleo-rules.nft"#012include "/etc/nftables/tripleo-jumps.nft"#012 state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:57:47 localhost python3[43715]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:47 localhost python3[43732]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:48 localhost python3[43751]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:48 localhost python3[43767]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:49 localhost python3[43783]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:49 localhost python3[43799]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 28 02:57:50 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=7 res=1
Nov 28 02:57:50 localhost python3[43819]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 28 02:57:51 localhost kernel: SELinux: Converting 2703 SID table entries...
Nov 28 02:57:51 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:57:51 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:57:51 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:57:51 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:57:51 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:57:51 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:57:51 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:57:51 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=8 res=1
Nov 28 02:57:51 localhost python3[43840]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 28 02:57:52 localhost kernel: SELinux: Converting 2703 SID table entries...
Nov 28 02:57:52 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:57:52 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:57:52 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:57:52 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:57:52 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:57:52 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:57:52 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:57:52 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=9 res=1
Nov 28 02:57:53 localhost python3[43861]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Nov 28 02:57:53 localhost kernel: SELinux: Converting 2703 SID table entries...
Nov 28 02:57:53 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 02:57:53 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 02:57:53 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 02:57:53 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 02:57:53 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 02:57:53 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 02:57:53 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 02:57:54 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=10 res=1
Nov 28 02:57:54 localhost python3[43882]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:54 localhost python3[43898]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:55 localhost python3[43914]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:57:55 localhost python3[43930]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:57:56 localhost python3[43946]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:57:56 localhost python3[43963]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 28 02:58:00 localhost python3[43980]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:00 localhost python3[44028]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:58:01 localhost python3[44071]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316680.6064835-76419-180082442120474/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:02 localhost python3[44101]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:58:02 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 28 02:58:02 localhost systemd[1]: Stopped Load Kernel Modules.
Nov 28 02:58:02 localhost systemd[1]: Stopping Load Kernel Modules...
Nov 28 02:58:02 localhost systemd[1]: Starting Load Kernel Modules...
Nov 28 02:58:02 localhost kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 28 02:58:02 localhost kernel: Bridge firewalling registered
Nov 28 02:58:02 localhost systemd-modules-load[44104]: Inserted module 'br_netfilter'
Nov 28 02:58:02 localhost systemd-modules-load[44104]: Module 'msr' is built in
Nov 28 02:58:02 localhost systemd[1]: Finished Load Kernel Modules.
Nov 28 02:58:03 localhost python3[44155]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:58:03 localhost python3[44198]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316682.8042061-76494-187475773523860/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:03 localhost python3[44228]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:04 localhost python3[44245]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:04 localhost python3[44263]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:04 localhost python3[44281]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:05 localhost python3[44298]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:05 localhost python3[44315]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:05 localhost python3[44332]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:06 localhost python3[44350]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:06 localhost python3[44368]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:06 localhost python3[44386]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:07 localhost python3[44404]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:07 localhost python3[44422]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:07 localhost python3[44440]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:07 localhost python3[44458]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:08 localhost python3[44475]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:08 localhost python3[44492]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:08 localhost python3[44509]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:09 localhost python3[44526]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Nov 28 02:58:09 localhost python3[44544]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 02:58:09 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 28 02:58:09 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 28 02:58:09 localhost systemd[1]: Stopping Apply Kernel Variables...
Nov 28 02:58:09 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 28 02:58:09 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 28 02:58:09 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 28 02:58:10 localhost python3[44564]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:10 localhost python3[44580]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:10 localhost python3[44596]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:11 localhost python3[44612]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 02:58:11 localhost python3[44639]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:11 localhost python3[44674]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:12 localhost python3[44711]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:12 localhost python3[44757]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 02:58:12 localhost python3[44786]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:58:13 localhost python3[44851]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:58:13 localhost python3[44909]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316692.8450131-76841-145110471484068/source _original_basename=tmp_0jdipv_ follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:58:13 localhost python3[44939]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 02:58:16 localhost python3[44956]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:58:16 localhost python3[45004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 02:58:17 localhost python3[45047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316696.285853-77052-28241370299002/source _original_basename=tmpqkpgen63 follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 02:58:17 localhost
python3[45077]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:17 localhost python3[45093]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:18 localhost python3[45109]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:18 localhost python3[45125]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:18 localhost python3[45141]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:19 localhost python3[45157]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:19 localhost python3[45173]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:19 localhost python3[45189]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:20 localhost python3[45205]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:20 localhost python3[45221]: ansible-group Invoked with gid=107 name=qemu state=present 
system=False local=False non_unique=False Nov 28 02:58:20 localhost python3[45243]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None Nov 28 02:58:21 localhost python3[45267]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None Nov 28 02:58:21 localhost python3[45283]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:22 localhost python3[45332]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:22 localhost python3[45375]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316702.0181687-77376-265437569415924/source _original_basename=tmpu0rjcyih follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False 
force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:23 localhost python3[45405]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Nov 28 02:58:24 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=11 res=1 Nov 28 02:58:24 localhost python3[45497]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:24 localhost python3[45513]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:25 localhost python3[45529]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False Nov 28 02:58:26 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=12 res=1 Nov 28 02:58:26 localhost python3[45550]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False 
skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 02:58:29 localhost python3[45567]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 02:58:30 localhost python3[45628]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:30 localhost python3[45644]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:31 localhost python3[45704]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:31 localhost python3[45747]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316711.00285-77866-91826337183377/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=2f5a399fbfa982ef0876ce5d0ff30a44474c412f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:32 localhost python3[45809]: ansible-ansible.legacy.stat Invoked with 
path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:32 localhost python3[45854]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316711.9728777-77904-47598495393176/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:33 localhost python3[45884]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:33 localhost python3[45900]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:33 localhost python3[45916]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:34 localhost python3[45932]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t 
mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 02:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s#012Cumulative WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 15.31 MB, 0.03 MB/s#012Interval WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) 
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Nov 28 02:58:35 localhost python3[45980]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:35 localhost python3[46023]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316714.6777782-78040-191918879047508/source _original_basename=tmp_nbd7txt follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:35 localhost python3[46053]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:36 localhost python3[46069]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:36 localhost python3[46085]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 02:58:38 localhost ceph-osd[33334]: 
rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 02:58:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3247 writes, 16K keys, 3247 commit groups, 1.0 writes per commit group, ingest: 14.62 MB, 0.02 MB/s#012Interval WAL: 3247 writes, 139 syncs, 23.36 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, 
garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 
Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Nov 28 02:58:40 localhost python3[46134]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:41 localhost python3[46179]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316720.3253143-78346-266047300254133/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None 
local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:41 localhost python3[46210]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 02:58:41 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 28 02:58:41 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 28 02:58:41 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 28 02:58:41 localhost systemd[1]: sshd.service: Consumed 14.274s CPU time, read 2.1M from disk, written 428.0K to disk. Nov 28 02:58:41 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 28 02:58:41 localhost systemd[1]: Stopping sshd-keygen.target... Nov 28 02:58:41 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 28 02:58:41 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 28 02:58:41 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 28 02:58:41 localhost systemd[1]: Reached target sshd-keygen.target. Nov 28 02:58:41 localhost systemd[1]: Starting OpenSSH server daemon... Nov 28 02:58:41 localhost sshd[46214]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:58:41 localhost systemd[1]: Started OpenSSH server daemon. 
Nov 28 02:58:42 localhost python3[46230]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:43 localhost python3[46248]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:44 localhost python3[46266]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 02:58:47 localhost python3[46315]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:48 localhost python3[46333]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:49 localhost python3[46363]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 02:58:49 localhost python3[46413]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:49 localhost python3[46431]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:50 localhost python3[46461]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 02:58:50 localhost systemd[1]: Reloading. Nov 28 02:58:50 localhost systemd-sysv-generator[46489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:58:50 localhost systemd-rc-local-generator[46486]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:58:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:58:50 localhost systemd[1]: Starting chronyd online sources service... Nov 28 02:58:50 localhost chronyc[46500]: 200 OK Nov 28 02:58:50 localhost systemd[1]: chrony-online.service: Deactivated successfully. 
Nov 28 02:58:50 localhost systemd[1]: Finished chronyd online sources service. Nov 28 02:58:51 localhost python3[46516]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:51 localhost chronyd[26579]: System clock was stepped by -0.000044 seconds Nov 28 02:58:51 localhost python3[46533]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:52 localhost python3[46550]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:52 localhost chronyd[26579]: System clock was stepped by 0.000000 seconds Nov 28 02:58:52 localhost python3[46567]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:52 localhost python3[46584]: ansible-timezone Invoked with name=UTC hwclock=None Nov 28 02:58:52 localhost systemd[1]: Starting Time & Date Service... Nov 28 02:58:52 localhost systemd[1]: Started Time & Date Service. 
Nov 28 02:58:53 localhost python3[46604]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:54 localhost python3[46621]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:55 localhost python3[46638]: ansible-slurp Invoked with src=/etc/tuned/active_profile Nov 28 02:58:55 localhost python3[46654]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 02:58:56 localhost python3[46670]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:56 localhost python3[46686]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:56 localhost python3[46734]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:57 
localhost python3[46777]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316736.6169143-79361-80936644226304/source _original_basename=tmpz63bned5 follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:57 localhost python3[46839]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:58:58 localhost python3[46882]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316737.4807377-79416-164637156526161/source _original_basename=tmpkuxti0_7 follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:58:58 localhost python3[46912]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 28 02:58:58 localhost systemd[1]: Reloading. Nov 28 02:58:58 localhost systemd-sysv-generator[46942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 02:58:58 localhost systemd-rc-local-generator[46937]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:58:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:58:59 localhost python3[46966]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:58:59 localhost python3[46982]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:58:59 localhost systemd[36764]: Created slice User Background Tasks Slice. Nov 28 02:58:59 localhost systemd[36764]: Starting Cleanup of User's Temporary Files and Directories... Nov 28 02:58:59 localhost systemd[36764]: Finished Cleanup of User's Temporary Files and Directories. Nov 28 02:59:00 localhost python3[47000]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:59:00 localhost systemd[1]: run-netns-ns_temp.mount: Deactivated successfully. 
Nov 28 02:59:00 localhost python3[47017]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:59:00 localhost python3[47033]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:59:01 localhost python3[47081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:59:01 localhost python3[47124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316740.9649713-79678-67147584158396/source _original_basename=tmp5iut2dii follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:59:02 localhost sshd[47139]: main: sshd: ssh-rsa algorithm is disabled Nov 28 02:59:22 localhost python3[47232]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 02:59:23 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 28 02:59:23 localhost python3[47250]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None Nov 28 02:59:23 localhost python3[47266]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 02:59:23 localhost python3[47282]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:59:24 localhost python3[47298]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
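Several `ansible-file` invocations above log `mode=` as a plain decimal integer (e.g. `mode=493`, `mode=488`, `mode=420`): when a playbook passes the mode without a leading zero or quotes, Ansible records the octal permission as its decimal value. A quick sketch of the conversion back to chmod-style octal:

```python
# Decimal mode values seen in the log and their octal (chmod-style) equivalents
for mode in (493, 488, 420):
    print(f"mode={mode} -> {oct(mode)[2:]}")
# mode=493 -> 755
# mode=488 -> 750
# mode=420 -> 644
```

So the entries above requested the ordinary permissions 0755, 0750, and 0644 respectively.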
seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:59:24 localhost python3[47314]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None Nov 28 02:59:25 localhost kernel: SELinux: Converting 2706 SID table entries... Nov 28 02:59:25 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 02:59:25 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 02:59:25 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 02:59:25 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 02:59:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 02:59:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 02:59:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 02:59:25 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=13 res=1 Nov 28 02:59:26 localhost python3[47339]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 02:59:27 localhost python3[47476]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': 
['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': [ Nov 28 02:59:28 localhost rsyslogd[758]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ] Nov 
28 02:59:28 localhost python3[47492]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 02:59:28 localhost python3[47508]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 02:59:29 localhost python3[47524]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 
'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 
'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 
'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file 
/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}} Nov 28 02:59:35 localhost python3[47572]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 02:59:35 localhost python3[47615]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316774.8966718-81139-96789904088545/source _original_basename=tmpi28phjyh follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 02:59:36 localhost python3[47645]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True 
checksum_algorithm=sha1 Nov 28 02:59:38 localhost python3[47768]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 02:59:40 localhost python3[47889]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 28 02:59:42 localhost python3[47905]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 02:59:43 localhost python3[47922]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 02:59:46 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:59:46 localhost dbus-broker-launch[14507]: Noticed file-system modification, trigger reload. 
Nov 28 02:59:46 localhost dbus-broker-launch[14507]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored Nov 28 02:59:46 localhost dbus-broker-launch[14507]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored Nov 28 02:59:46 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:59:47 localhost systemd[1]: Reexecuting. Nov 28 02:59:47 localhost systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 28 02:59:47 localhost systemd[1]: Detected virtualization kvm. Nov 28 02:59:47 localhost systemd[1]: Detected architecture x86-64. Nov 28 02:59:47 localhost systemd-rc-local-generator[47977]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:59:47 localhost systemd-sysv-generator[47983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:59:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:59:55 localhost kernel: SELinux: Converting 2706 SID table entries... 
Nov 28 02:59:55 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 02:59:55 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 02:59:55 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 02:59:55 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 02:59:55 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 02:59:55 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 02:59:55 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 02:59:55 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:59:55 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=14 res=1 Nov 28 02:59:55 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. Nov 28 02:59:56 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 02:59:56 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 02:59:56 localhost systemd[1]: Reloading. Nov 28 02:59:56 localhost systemd-rc-local-generator[48049]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:59:56 localhost systemd-sysv-generator[48052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:59:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:59:56 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Nov 28 02:59:56 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 02:59:56 localhost systemd-journald[618]: Journal stopped Nov 28 02:59:57 localhost systemd[1]: Stopping Journal Service... Nov 28 02:59:57 localhost systemd-journald[618]: Received SIGTERM from PID 1 (systemd). Nov 28 02:59:57 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files... Nov 28 02:59:57 localhost systemd[1]: systemd-journald.service: Deactivated successfully. Nov 28 02:59:57 localhost systemd[1]: Stopped Journal Service. Nov 28 02:59:57 localhost systemd[1]: systemd-journald.service: Consumed 2.545s CPU time. Nov 28 02:59:57 localhost systemd[1]: Starting Journal Service... Nov 28 02:59:57 localhost systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 28 02:59:57 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files. Nov 28 02:59:57 localhost systemd[1]: systemd-udevd.service: Consumed 3.147s CPU time. Nov 28 02:59:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files... Nov 28 02:59:57 localhost systemd-journald[48427]: Journal started Nov 28 02:59:57 localhost systemd-journald[48427]: Runtime Journal (/run/log/journal/5cd59ba25ae47acac865224fa46a5f9e) is 12.8M, max 314.7M, 301.9M free. Nov 28 02:59:57 localhost systemd[1]: Started Journal Service. Nov 28 02:59:57 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation. Nov 28 02:59:57 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 02:59:57 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 02:59:57 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 02:59:57 localhost systemd-udevd[48433]: Using default interface naming scheme 'rhel-9.0'. Nov 28 02:59:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files. Nov 28 02:59:57 localhost systemd[1]: Reloading. Nov 28 02:59:57 localhost systemd-rc-local-generator[49050]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 02:59:57 localhost systemd-sysv-generator[49054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 02:59:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 02:59:57 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 02:59:57 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 02:59:57 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 02:59:57 localhost systemd[1]: man-db-cache-update.service: Consumed 1.288s CPU time. Nov 28 02:59:57 localhost systemd[1]: run-r9e51df2d70664b349fb2ee3c2b2c60c0.service: Deactivated successfully. Nov 28 02:59:57 localhost systemd[1]: run-rc26f846339ab422d9c3850f3df48cfb5.service: Deactivated successfully. 
Nov 28 02:59:59 localhost python3[49417]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False Nov 28 02:59:59 localhost python3[49436]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 03:00:00 localhost python3[49454]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:00:00 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json Nov 28 03:00:00 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false Nov 28 03:00:17 localhost podman[49466]: 2025-11-28 08:00:00.939634721 +0000 UTC m=+0.030707961 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 28 03:00:17 localhost python3[49454]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json Nov 28 03:00:17 localhost systemd[1]: tmp-crun.iIYSCi.mount: Deactivated successfully. 
Nov 28 03:00:17 localhost podman[49652]: 2025-11-28 08:00:17.409383502 +0000 UTC m=+0.094884278 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 03:00:17 localhost python3[49681]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:00:17 localhost python3[49681]: ansible-containers.podman.podman_image 
PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json Nov 28 03:00:17 localhost podman[49652]: 2025-11-28 08:00:17.537401966 +0000 UTC m=+0.222902782 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, release=553, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55) Nov 28 03:00:17 localhost python3[49681]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false Nov 28 03:00:25 localhost podman[49729]: 2025-11-28 08:00:17.687997817 +0000 UTC m=+0.042470256 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 28 03:00:25 localhost python3[49681]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 
44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json Nov 28 03:00:26 localhost python3[49924]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:00:26 localhost python3[49924]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json Nov 28 03:00:26 localhost python3[49924]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false Nov 28 03:00:43 localhost podman[49937]: 2025-11-28 08:00:26.376281055 +0000 UTC m=+0.043795087 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:00:43 localhost python3[49924]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json Nov 28 03:00:43 localhost python3[50299]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER 
ca_cert_dir=None Nov 28 03:00:43 localhost python3[50299]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json Nov 28 03:00:43 localhost python3[50299]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false Nov 28 03:00:57 localhost podman[50312]: 2025-11-28 08:00:44.008463072 +0000 UTC m=+0.040243856 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:00:57 localhost python3[50299]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json Nov 28 03:00:57 localhost python3[50395]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:00:57 localhost python3[50395]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json Nov 28 03:00:57 localhost python3[50395]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false Nov 28 03:01:05 localhost podman[50408]: 2025-11-28 08:00:57.995379575 +0000 UTC m=+0.031002361 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 28 03:01:05 localhost python3[50395]: 
ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json Nov 28 03:01:05 localhost python3[50758]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:01:05 localhost python3[50758]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json Nov 28 03:01:05 localhost python3[50758]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false Nov 28 03:01:11 localhost podman[50770]: 2025-11-28 08:01:05.99363467 +0000 UTC m=+0.044023584 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 28 03:01:11 localhost python3[50758]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json Nov 28 03:01:11 localhost python3[50849]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None 
auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:01:11 localhost python3[50849]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json Nov 28 03:01:11 localhost python3[50849]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false Nov 28 03:01:13 localhost podman[50862]: 2025-11-28 08:01:11.815459746 +0000 UTC m=+0.047358561 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 28 03:01:13 localhost python3[50849]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json Nov 28 03:01:14 localhost python3[50941]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:01:14 localhost python3[50941]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json Nov 28 03:01:14 localhost python3[50941]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false Nov 28 03:01:16 localhost podman[50955]: 2025-11-28 08:01:14.354340683 +0000 UTC m=+0.045486536 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 28 03:01:16 localhost python3[50941]: 
ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json Nov 28 03:01:16 localhost python3[51033]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None Nov 28 03:01:16 localhost python3[51033]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json Nov 28 03:01:16 localhost python3[51033]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false Nov 28 03:01:19 localhost podman[51046]: 2025-11-28 08:01:16.891433899 +0000 UTC m=+0.043991104 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 28 03:01:19 localhost python3[51033]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json Nov 28 03:01:19 localhost python3[51125]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': 
None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 28 03:01:19 localhost python3[51125]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Nov 28 03:01:19 localhost python3[51125]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Nov 28 03:01:22 localhost podman[51138]: 2025-11-28 08:01:19.571884204 +0000 UTC m=+0.047978150 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Nov 28 03:01:22 localhost python3[51125]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Nov 28 03:01:23 localhost python3[51289]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Nov 28 03:01:23 localhost python3[51289]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Nov 28 03:01:23 localhost python3[51289]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Nov 28 03:01:25 localhost podman[51302]: 2025-11-28 08:01:23.393047174 +0000 UTC m=+0.045160517 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 28 03:01:25 localhost python3[51289]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Nov 28 03:01:25 localhost python3[51394]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:01:27 localhost ansible-async_wrapper.py[51566]: Invoked with 4007624472 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316886.9211266-84042-242228584962139/AnsiballZ_command.py _
Nov 28 03:01:27 localhost ansible-async_wrapper.py[51569]: Starting module and watcher
Nov 28 03:01:27 localhost ansible-async_wrapper.py[51569]: Start watching 51570 (3600)
Nov 28 03:01:27 localhost ansible-async_wrapper.py[51570]: Start module (51570)
Nov 28 03:01:27 localhost ansible-async_wrapper.py[51566]: Return async_wrapper task started.
Nov 28 03:01:27 localhost python3[51587]: ansible-ansible.legacy.async_status Invoked with jid=4007624472.51566 mode=status _async_dir=/tmp/.ansible_async
Nov 28 03:01:31 localhost puppet-user[51590]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 28 03:01:31 localhost puppet-user[51590]: (file: /etc/puppet/hiera.yaml)
Nov 28 03:01:31 localhost puppet-user[51590]: Warning: Undefined variable '::deploy_config_name';
Nov 28 03:01:31 localhost puppet-user[51590]: (file & line not available)
Nov 28 03:01:31 localhost puppet-user[51590]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 28 03:01:31 localhost puppet-user[51590]: (file & line not available)
Nov 28 03:01:31 localhost puppet-user[51590]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 28 03:01:31 localhost puppet-user[51590]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 28 03:01:31 localhost puppet-user[51590]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.12 seconds
Nov 28 03:01:31 localhost puppet-user[51590]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Nov 28 03:01:31 localhost puppet-user[51590]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Nov 28 03:01:31 localhost puppet-user[51590]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Nov 28 03:01:31 localhost puppet-user[51590]: Notice: Applied catalog in 0.05 seconds
Nov 28 03:01:31 localhost puppet-user[51590]: Application:
Nov 28 03:01:31 localhost puppet-user[51590]: Initial environment: production
Nov 28 03:01:31 localhost puppet-user[51590]: Converged environment: production
Nov 28 03:01:31 localhost puppet-user[51590]: Run mode: user
Nov 28 03:01:31 localhost puppet-user[51590]: Changes:
Nov 28 03:01:31 localhost puppet-user[51590]: Total: 3
Nov 28 03:01:31 localhost puppet-user[51590]: Events:
Nov 28 03:01:31 localhost puppet-user[51590]: Success: 3
Nov 28 03:01:31 localhost puppet-user[51590]: Total: 3
Nov 28 03:01:31 localhost puppet-user[51590]: Resources:
Nov 28 03:01:31 localhost puppet-user[51590]: Changed: 3
Nov 28 03:01:31 localhost puppet-user[51590]: Out of sync: 3
Nov 28 03:01:31 localhost puppet-user[51590]: Total: 10
Nov 28 03:01:31 localhost puppet-user[51590]: Time:
Nov 28 03:01:31 localhost puppet-user[51590]: Schedule: 0.00
Nov 28 03:01:31 localhost puppet-user[51590]: File: 0.00
Nov 28 03:01:31 localhost puppet-user[51590]: Exec: 0.01
Nov 28 03:01:31 localhost puppet-user[51590]: Augeas: 0.02
Nov 28 03:01:31 localhost puppet-user[51590]: Transaction evaluation: 0.05
Nov 28 03:01:31 localhost puppet-user[51590]: Catalog application: 0.05
Nov 28 03:01:31 localhost puppet-user[51590]: Config retrieval: 0.16
Nov 28 03:01:31 localhost puppet-user[51590]: Last run: 1764316891
Nov 28 03:01:31 localhost puppet-user[51590]: Filebucket: 0.00
Nov 28 03:01:31 localhost puppet-user[51590]: Total: 0.05
Nov 28 03:01:31 localhost puppet-user[51590]: Version:
Nov 28 03:01:31 localhost puppet-user[51590]: Config: 1764316891
Nov 28 03:01:31 localhost puppet-user[51590]: Puppet: 7.10.0
Nov 28 03:01:31 localhost ansible-async_wrapper.py[51570]: Module complete (51570)
Nov 28 03:01:32 localhost ansible-async_wrapper.py[51569]: Done in kid B.
Nov 28 03:01:38 localhost python3[51941]: ansible-ansible.legacy.async_status Invoked with jid=4007624472.51566 mode=status _async_dir=/tmp/.ansible_async
Nov 28 03:01:38 localhost python3[51957]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 03:01:39 localhost python3[51973]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:01:39 localhost python3[52021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:01:40 localhost python3[52064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316899.2812352-84370-231143398356466/source _original_basename=tmpn8xo65m6 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 03:01:40 localhost python3[52094]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:01:41 localhost python3[52198]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 28 03:01:42 localhost python3[52217]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 03:01:42 localhost python3[52233]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005538515 step=1 update_config_hash_only=False
Nov 28 03:01:43 localhost python3[52249]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:01:43 localhost python3[52265]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 28 03:01:44 localhost python3[52281]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 28 03:01:45 localhost python3[52321]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Nov 28 03:01:45 localhost podman[52501]: 2025-11-28 08:01:45.742058736 +0000 UTC m=+0.069480866 container create 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-collectd, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true)
Nov 28 03:01:45 localhost podman[52490]: 2025-11-28 08:01:45.777781737 +0000 UTC m=+0.119662449 container create d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, release=1761123044, container_name=container-puppet-nova_libvirt, build-date=2025-11-19T00:35:22Z, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Nov 28 03:01:45 localhost systemd[1]: Started libpod-conmon-8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10.scope.
Nov 28 03:01:45 localhost podman[52490]: 2025-11-28 08:01:45.701494003 +0000 UTC m=+0.043374695 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 28 03:01:45 localhost podman[52526]: 2025-11-28 08:01:45.805164866 +0000 UTC m=+0.112371137 container create 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=container-puppet-metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 28 03:01:45 localhost podman[52501]: 2025-11-28 08:01:45.707426306 +0000 UTC m=+0.034848456 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Nov 28 03:01:45 localhost systemd[1]: Started libcrun container.
Nov 28 03:01:45 localhost systemd[1]: Started libpod-conmon-d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b.scope.
Nov 28 03:01:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 28 03:01:45 localhost systemd[1]: Started libpod-conmon-2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447.scope.
Nov 28 03:01:45 localhost systemd[1]: Started libcrun container.
Nov 28 03:01:45 localhost systemd[1]: Started libcrun container.
Nov 28 03:01:45 localhost podman[52531]: 2025-11-28 08:01:45.733849207 +0000 UTC m=+0.029341287 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Nov 28 03:01:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8beb7bd0728a5185ae08edcb0afeede0750b5c1acd8c5a453f776b712778919/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 28 03:01:45 localhost podman[52526]: 2025-11-28 08:01:45.73775594 +0000 UTC m=+0.044962231 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Nov 28 03:01:45 localhost podman[52501]: 2025-11-28 08:01:45.837708495 +0000 UTC m=+0.165130625 container init 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_puppet_step1, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 28 03:01:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d267351eb91c27e496fa400ef9055b36048428ec01962767ba6b671d1258ac4/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Nov 28 03:01:45 localhost podman[52526]: 2025-11-28 08:01:45.842317008 +0000 UTC m=+0.149523279 container init 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, vcs-type=git, container_name=container-puppet-metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team)
Nov 28 03:01:45 localhost podman[52501]: 2025-11-28 08:01:45.848903431 +0000 UTC m=+0.176325551 container start 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, io.openshift.expose-services=, release=1761123044, config_id=tripleo_puppet_step1, name=rhosp17/openstack-collectd, container_name=container-puppet-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Nov 28 03:01:45 localhost podman[52526]: 2025-11-28 08:01:45.851564689 +0000 UTC m=+0.158770960 container start 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, container_name=container-puppet-metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, release=1761123044, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z)
Nov 28 03:01:45 localhost podman[52526]: 2025-11-28 08:01:45.852353682 +0000 UTC m=+0.159559953 container attach 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, tcib_managed=true, container_name=container-puppet-metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 03:01:45 localhost podman[52501]: 2025-11-28 08:01:45.849126598 +0000 UTC m=+0.176548738 container attach 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=container-puppet-collectd, version=17.1.12, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Nov 28 03:01:45 localhost podman[52538]: 2025-11-28 08:01:45.769518746 +0000 UTC m=+0.051169052 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Nov 28 03:01:46 localhost systemd[1]: tmp-crun.jsdx86.mount: Deactivated successfully.
Nov 28 03:01:47 localhost podman[52538]: 2025-11-28 08:01:47.187143726 +0000 UTC m=+1.468794062 container create ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro',
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, container_name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 28 03:01:47 localhost podman[52490]: 2025-11-28 08:01:47.21202581 +0000 UTC m=+1.553906542 container init d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, container_name=container-puppet-nova_libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_puppet_step1) Nov 28 03:01:47 localhost podman[52490]: 2025-11-28 08:01:47.222193407 +0000 UTC m=+1.564074129 container start d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, 
build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 28 03:01:47 localhost podman[52490]: 2025-11-28 08:01:47.222899557 +0000 UTC m=+1.564780339 container attach d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, 
version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git) Nov 28 03:01:47 localhost systemd[1]: Started libpod-conmon-ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0.scope. Nov 28 03:01:47 localhost systemd[1]: Started libcrun container. Nov 28 03:01:47 localhost podman[52531]: 2025-11-28 08:01:47.256708583 +0000 UTC m=+1.552200693 container create 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-crond, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com) Nov 28 03:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e7cf6abe6a2bbfc58462ae307ac9362023c413708070730336bba274ac12e7/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29e7cf6abe6a2bbfc58462ae307ac9362023c413708070730336bba274ac12e7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:47 localhost podman[52538]: 2025-11-28 08:01:47.277344105 +0000 UTC m=+1.558994401 
container init ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Nov 28 03:01:47 localhost podman[52538]: 2025-11-28 08:01:47.286288816 +0000 UTC m=+1.567939142 container start ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, vcs-type=git, container_name=container-puppet-iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 
'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044) Nov 28 03:01:47 localhost podman[52538]: 2025-11-28 08:01:47.287000806 +0000 UTC m=+1.568651182 container attach ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public) Nov 28 03:01:47 localhost systemd[1]: Started libpod-conmon-3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751.scope. Nov 28 03:01:47 localhost systemd[1]: Started libcrun container. Nov 28 03:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77782651c01fa3d8af8a79c02d3312e7fed09a9087964da1a7c959a65a9214b8/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:47 localhost podman[52531]: 2025-11-28 08:01:47.366113072 +0000 UTC m=+1.661605132 container init 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=container-puppet-crond, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container) Nov 28 03:01:47 localhost podman[52531]: 2025-11-28 08:01:47.372988233 +0000 UTC m=+1.668480293 container start 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-type=git, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container) Nov 28 03:01:47 localhost podman[52531]: 2025-11-28 08:01:47.374079785 +0000 UTC m=+1.669571875 container attach 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, container_name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=) Nov 28 03:01:48 localhost podman[52409]: 2025-11-28 08:01:45.58992135 +0000 UTC m=+0.041891822 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 28 03:01:48 localhost podman[52721]: 2025-11-28 08:01:48.183582225 +0000 UTC m=+0.060071772 container create 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, url=https://www.redhat.com, version=17.1.12, vcs-type=git, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-central-container, architecture=x86_64, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:01:48 localhost systemd[1]: Started libpod-conmon-904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273.scope. Nov 28 03:01:48 localhost systemd[1]: Started libcrun container. Nov 28 03:01:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af4441ca58e5a3dae70e850402577fe72fc0370c205d9690db9c04c01d30a59b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:48 localhost podman[52721]: 2025-11-28 08:01:48.243549973 +0000 UTC m=+0.120039550 container init 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ceilometer, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-central, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-central, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044) Nov 28 03:01:48 localhost podman[52721]: 2025-11-28 08:01:48.150256493 +0000 UTC m=+0.026746070 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 28 03:01:48 localhost podman[52721]: 2025-11-28 08:01:48.252559626 +0000 UTC m=+0.129049183 container start 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, build-date=2025-11-19T00:11:59Z, tcib_managed=true, config_id=tripleo_puppet_step1, name=rhosp17/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}) Nov 28 03:01:48 localhost podman[52721]: 2025-11-28 08:01:48.253127222 +0000 UTC m=+0.129616799 container attach 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, release=1761123044, 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-ceilometer-central-container, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, container_name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, 
config_id=tripleo_puppet_step1, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-central) Nov 28 03:01:48 localhost ovs-vsctl[52785]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Nov 28 03:01:49 localhost puppet-user[52636]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:01:49 localhost puppet-user[52636]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:49 localhost puppet-user[52636]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:49 localhost puppet-user[52636]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52636]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:49 localhost puppet-user[52636]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52657]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:01:49 localhost puppet-user[52657]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:49 localhost puppet-user[52657]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52636]: Notice: Accepting previously invalid value for target type 'Integer' Nov 28 03:01:49 localhost puppet-user[52657]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:49 localhost puppet-user[52657]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52636]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.12 seconds Nov 28 03:01:49 localhost puppet-user[52638]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:01:49 localhost puppet-user[52638]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:49 localhost puppet-user[52638]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:49 localhost puppet-user[52638]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root' Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root' Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755' Nov 28 03:01:49 localhost puppet-user[52675]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:01:49 localhost puppet-user[52675]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created Nov 28 03:01:49 localhost puppet-user[52675]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:49 localhost puppet-user[52675]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52689]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 28 03:01:49 localhost puppet-user[52689]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:49 localhost puppet-user[52689]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:49 localhost puppet-user[52689]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}8b21629c3c588c101e32eb798e9e14b646a0cfd6fc622da2fa0b582fa1678bbf' Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created Nov 28 03:01:49 localhost puppet-user[52636]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created Nov 28 03:01:49 localhost puppet-user[52636]: Notice: Applied catalog in 0.02 seconds Nov 28 03:01:49 localhost puppet-user[52636]: Application: Nov 28 03:01:49 localhost puppet-user[52636]: Initial environment: production Nov 28 03:01:49 localhost puppet-user[52636]: Converged environment: production Nov 28 03:01:49 localhost puppet-user[52636]: Run mode: user Nov 28 03:01:49 localhost puppet-user[52636]: Changes: Nov 28 03:01:49 localhost puppet-user[52636]: Total: 7 Nov 28 03:01:49 localhost puppet-user[52636]: Events: Nov 28 03:01:49 localhost puppet-user[52636]: Success: 7 Nov 28 03:01:49 localhost puppet-user[52636]: Total: 7 Nov 28 03:01:49 localhost puppet-user[52636]: Resources: Nov 28 03:01:49 localhost puppet-user[52636]: Skipped: 13 Nov 28 03:01:49 localhost puppet-user[52636]: Changed: 5 Nov 28 03:01:49 localhost puppet-user[52636]: Out of sync: 5 Nov 28 03:01:49 localhost puppet-user[52636]: Total: 20 Nov 28 03:01:49 localhost puppet-user[52636]: Time: Nov 28 03:01:49 localhost puppet-user[52636]: File: 0.01 Nov 28 03:01:49 localhost puppet-user[52636]: Transaction evaluation: 0.02 Nov 28 03:01:49 localhost puppet-user[52636]: Catalog application: 
0.02 Nov 28 03:01:49 localhost puppet-user[52636]: Config retrieval: 0.16 Nov 28 03:01:49 localhost puppet-user[52636]: Last run: 1764316909 Nov 28 03:01:49 localhost puppet-user[52636]: Total: 0.02 Nov 28 03:01:49 localhost puppet-user[52636]: Version: Nov 28 03:01:49 localhost puppet-user[52636]: Config: 1764316909 Nov 28 03:01:49 localhost puppet-user[52636]: Puppet: 7.10.0 Nov 28 03:01:49 localhost puppet-user[52638]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:49 localhost puppet-user[52638]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52675]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:49 localhost puppet-user[52675]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52689]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. 
See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:49 localhost puppet-user[52689]: (file & line not available) Nov 28 03:01:49 localhost puppet-user[52689]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.07 seconds Nov 28 03:01:49 localhost puppet-user[52675]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.10 seconds Nov 28 03:01:49 localhost puppet-user[52689]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0' Nov 28 03:01:49 localhost puppet-user[52689]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created Nov 28 03:01:49 localhost puppet-user[52689]: Notice: Applied catalog in 0.04 seconds Nov 28 03:01:49 localhost puppet-user[52689]: Application: Nov 28 03:01:49 localhost puppet-user[52689]: Initial environment: production Nov 28 03:01:49 localhost puppet-user[52689]: Converged environment: production Nov 28 03:01:49 localhost puppet-user[52689]: Run mode: user Nov 28 03:01:49 localhost puppet-user[52689]: Changes: Nov 28 03:01:49 localhost puppet-user[52689]: Total: 2 Nov 28 03:01:49 localhost puppet-user[52689]: Events: Nov 28 03:01:49 localhost puppet-user[52689]: Success: 2 Nov 28 03:01:49 localhost puppet-user[52689]: Total: 2 Nov 28 03:01:49 localhost puppet-user[52689]: Resources: Nov 28 03:01:49 localhost puppet-user[52689]: Changed: 2 Nov 28 03:01:49 localhost puppet-user[52689]: Out of sync: 2 Nov 28 03:01:49 localhost puppet-user[52689]: Skipped: 7 Nov 28 03:01:49 localhost puppet-user[52689]: Total: 9 Nov 28 03:01:49 localhost puppet-user[52689]: Time: Nov 28 03:01:49 localhost puppet-user[52689]: File: 0.00 Nov 28 03:01:49 localhost puppet-user[52689]: Cron: 0.01 Nov 28 03:01:49 localhost puppet-user[52689]: Transaction evaluation: 0.04 Nov 28 03:01:49 
localhost puppet-user[52689]: Catalog application: 0.04 Nov 28 03:01:49 localhost puppet-user[52689]: Config retrieval: 0.09 Nov 28 03:01:49 localhost puppet-user[52689]: Last run: 1764316909 Nov 28 03:01:49 localhost puppet-user[52689]: Total: 0.04 Nov 28 03:01:49 localhost puppet-user[52689]: Version: Nov 28 03:01:49 localhost puppet-user[52689]: Config: 1764316909 Nov 28 03:01:49 localhost puppet-user[52689]: Puppet: 7.10.0 Nov 28 03:01:49 localhost puppet-user[52675]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully Nov 28 03:01:49 localhost puppet-user[52675]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \ Nov 28 03:01:49 localhost puppet-user[52657]: in a future release. Use nova::cinder::os_region_name instead Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \ Nov 28 03:01:49 localhost puppet-user[52657]: in a future release. Use nova::cinder::catalog_info instead Nov 28 03:01:49 localhost puppet-user[52675]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41) Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5) Nov 28 03:01:49 localhost systemd[1]: libpod-2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447.scope: Deactivated successfully. 
Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5) Nov 28 03:01:49 localhost systemd[1]: libpod-2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447.scope: Consumed 2.192s CPU time. Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5) Nov 28 03:01:49 localhost puppet-user[52638]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.37 seconds Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set. Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. 
Use the same parameter in nova::glance Nov 28 03:01:49 localhost podman[53135]: 2025-11-28 08:01:49.561972189 +0000 UTC m=+0.035483855 container died 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, container_name=container-puppet-metrics_qdr, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, build-date=2025-11-18T22:49:46Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd) Nov 28 03:01:49 localhost systemd[1]: tmp-crun.c2NAS2.mount: Deactivated successfully. Nov 28 03:01:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:49 localhost systemd[1]: libpod-3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751.scope: Deactivated successfully. Nov 28 03:01:49 localhost systemd[1]: libpod-3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751.scope: Consumed 2.038s CPU time. 
Nov 28 03:01:49 localhost podman[52531]: 2025-11-28 08:01:49.621141764 +0000 UTC m=+3.916633874 container died 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
maintainer=OpenStack TripleO Team, container_name=container-puppet-crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640' Nov 28 03:01:49 localhost podman[53135]: 2025-11-28 08:01:49.676167429 +0000 UTC m=+0.149679085 container cleanup 2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, tcib_managed=true, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:01:49 localhost systemd[1]: libpod-conmon-2b4ea332b462b808784dc8956e5a9441745ce5065c88da8e21e37d1ee9bf6447.scope: Deactivated successfully. Nov 28 03:01:49 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::qdr#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 28 03:01:49 localhost puppet-user[52657]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used. 
Nov 28 03:01:49 localhost puppet-user[52675]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully Nov 28 03:01:49 localhost puppet-user[52675]: Notice: Applied catalog in 0.43 seconds Nov 28 03:01:49 localhost puppet-user[52675]: Application: Nov 28 03:01:49 localhost puppet-user[52675]: Initial environment: production Nov 28 03:01:49 localhost puppet-user[52675]: Converged environment: production Nov 28 03:01:49 localhost puppet-user[52675]: Run mode: user Nov 28 03:01:49 localhost puppet-user[52675]: Changes: Nov 28 03:01:49 localhost puppet-user[52675]: Total: 4 Nov 28 03:01:49 localhost puppet-user[52675]: Events: Nov 28 03:01:49 localhost puppet-user[52675]: Success: 4 Nov 28 03:01:49 localhost puppet-user[52675]: Total: 4 Nov 28 03:01:49 localhost puppet-user[52675]: Resources: Nov 28 03:01:49 localhost puppet-user[52675]: Changed: 4 Nov 28 03:01:49 localhost puppet-user[52675]: Out of sync: 4 Nov 28 03:01:49 localhost puppet-user[52675]: Skipped: 8 Nov 28 03:01:49 localhost puppet-user[52675]: Total: 13 Nov 28 03:01:49 localhost puppet-user[52675]: Time: Nov 28 03:01:49 localhost puppet-user[52675]: File: 0.00 Nov 28 03:01:49 localhost puppet-user[52675]: Exec: 0.05 Nov 28 03:01:49 localhost puppet-user[52675]: Config retrieval: 0.12 Nov 28 03:01:49 localhost puppet-user[52675]: Augeas: 0.38 Nov 28 03:01:49 localhost puppet-user[52675]: Transaction evaluation: 0.43 Nov 28 03:01:49 localhost puppet-user[52675]: Catalog application: 0.43 Nov 28 03:01:49 localhost puppet-user[52675]: Last run: 1764316909 Nov 28 03:01:49 localhost puppet-user[52675]: Total: 0.43 Nov 28 03:01:49 localhost puppet-user[52675]: Version: Nov 28 03:01:49 localhost puppet-user[52675]: Config: 1764316909 Nov 28 03:01:49 localhost puppet-user[52675]: Puppet: 7.10.0 Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root' Nov 28 
03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed Nov 28 03:01:49 localhost podman[53164]: 2025-11-28 08:01:49.737227239 +0000 UTC m=+0.106415053 container cleanup 3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, build-date=2025-11-18T22:49:32Z, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, container_name=container-puppet-crond, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron) Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: 
/Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed Nov 28 03:01:49 localhost systemd[1]: libpod-conmon-3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751.scope: Deactivated successfully. 
Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750' Nov 28 03:01:49 localhost systemd[1]: var-lib-containers-storage-overlay-77782651c01fa3d8af8a79c02d3312e7fed09a9087964da1a7c959a65a9214b8-merged.mount: Deactivated successfully. Nov 28 03:01:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3139c22a6742d68c52f0403d373ba7e8f851d424310f3264e609af6544368751-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:49 localhost systemd[1]: var-lib-containers-storage-overlay-c8beb7bd0728a5185ae08edcb0afeede0750b5c1acd8c5a453f776b712778919-merged.mount: Deactivated successfully. Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595' Nov 28 
03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed Nov 28 03:01:49 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label 
config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume 
/var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62' Nov 28 03:01:49 localhost puppet-user[52638]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed Nov 28 03:01:49 localhost puppet-user[52638]: Notice: Applied 
catalog in 0.26 seconds Nov 28 03:01:49 localhost puppet-user[52638]: Application: Nov 28 03:01:49 localhost puppet-user[52638]: Initial environment: production Nov 28 03:01:49 localhost puppet-user[52638]: Converged environment: production Nov 28 03:01:49 localhost puppet-user[52638]: Run mode: user Nov 28 03:01:49 localhost puppet-user[52638]: Changes: Nov 28 03:01:49 localhost puppet-user[52638]: Total: 43 Nov 28 03:01:49 localhost puppet-user[52638]: Events: Nov 28 03:01:49 localhost puppet-user[52638]: Success: 43 Nov 28 03:01:49 localhost puppet-user[52638]: Total: 43 Nov 28 03:01:49 localhost puppet-user[52638]: Resources: Nov 28 03:01:49 localhost puppet-user[52638]: Skipped: 14 Nov 28 03:01:49 localhost puppet-user[52638]: Changed: 38 Nov 28 03:01:49 localhost puppet-user[52638]: Out of sync: 38 Nov 28 03:01:49 localhost puppet-user[52638]: Total: 82 Nov 28 03:01:49 localhost puppet-user[52638]: Time: Nov 28 03:01:49 localhost puppet-user[52638]: Concat fragment: 0.00 Nov 28 03:01:49 localhost puppet-user[52638]: Concat file: 0.00 Nov 28 03:01:49 localhost puppet-user[52638]: File: 0.09 Nov 28 03:01:49 localhost puppet-user[52638]: Transaction evaluation: 0.25 Nov 28 03:01:49 localhost puppet-user[52638]: Catalog application: 0.26 Nov 28 03:01:49 localhost puppet-user[52638]: Config retrieval: 0.44 Nov 28 03:01:49 localhost puppet-user[52638]: Last run: 1764316909 Nov 28 03:01:49 localhost puppet-user[52638]: Total: 0.26 Nov 28 03:01:49 localhost puppet-user[52638]: Version: Nov 28 03:01:49 localhost puppet-user[52638]: Config: 1764316909 Nov 28 03:01:49 localhost puppet-user[52638]: Puppet: 7.10.0 Nov 28 03:01:50 localhost systemd[1]: libpod-ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0.scope: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: libpod-ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0.scope: Consumed 2.510s CPU time. 
Nov 28 03:01:50 localhost podman[52538]: 2025-11-28 08:01:50.070759433 +0000 UTC m=+4.352409719 container died ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_id=tripleo_puppet_step1, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=container-puppet-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:01:50 localhost podman[53295]: 2025-11-28 08:01:50.094424312 +0000 UTC m=+0.085411311 container create 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 
'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-rsyslog, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z) Nov 28 03:01:50 localhost systemd[1]: Started libpod-conmon-9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5.scope. Nov 28 03:01:50 localhost systemd[1]: Started libcrun container. 
Nov 28 03:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69bca6b1ae1a510e610471f91dc39084eac5a14908c47996b36473212637590d/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:50 localhost podman[53295]: 2025-11-28 08:01:50.14098579 +0000 UTC m=+0.131972779 container init 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-rsyslog, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, build-date=2025-11-18T22:49:49Z, config_id=tripleo_puppet_step1) Nov 28 03:01:50 localhost podman[53295]: 2025-11-28 08:01:50.06210494 +0000 UTC m=+0.053091929 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 28 03:01:50 localhost podman[53295]: 2025-11-28 08:01:50.166502114 +0000 UTC m=+0.157489103 container start 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'security_opt': 
['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, 
config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=container-puppet-rsyslog, vcs-type=git) Nov 28 03:01:50 localhost podman[53295]: 2025-11-28 08:01:50.166825483 +0000 UTC m=+0.157812492 container attach 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, release=1761123044, container_name=container-puppet-rsyslog, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., tcib_managed=true) Nov 28 03:01:50 localhost podman[53337]: 2025-11-28 08:01:50.17323541 +0000 UTC m=+0.092262001 container cleanup ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': 
'/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-iscsid, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=container-puppet-iscsid, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:01:50 localhost systemd[1]: libpod-conmon-ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0.scope: Deactivated successfully. 
Nov 28 03:01:50 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::iscsid#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} 
--log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 28 03:01:50 localhost systemd[1]: libpod-8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10.scope: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: libpod-8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10.scope: Consumed 2.713s CPU time. 
Nov 28 03:01:50 localhost podman[52501]: 2025-11-28 08:01:50.209716983 +0000 UTC m=+4.537139113 container died 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 28 03:01:50 localhost podman[53343]: 2025-11-28 08:01:50.223043372 +0000 UTC m=+0.129379333 container create ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 
'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 03:01:50 localhost systemd[1]: Started 
libpod-conmon-ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09.scope. Nov 28 03:01:50 localhost podman[53418]: 2025-11-28 08:01:50.267533199 +0000 UTC m=+0.052398139 container cleanup 8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, release=1761123044, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, distribution-scope=public, version=17.1.12) Nov 28 03:01:50 localhost puppet-user[52657]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 1.20 seconds Nov 28 03:01:50 localhost systemd[1]: libpod-conmon-8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10.scope: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: Started libcrun container. 
Nov 28 03:01:50 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 28 03:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce8c6ec24615a8ac03ae2a4194714a4f44afbdc43ba4491ff44c91e34e068e5/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:50 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce8c6ec24615a8ac03ae2a4194714a4f44afbdc43ba4491ff44c91e34e068e5/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:50 localhost podman[53343]: 2025-11-28 08:01:50.180352837 +0000 UTC m=+0.086688808 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 28 03:01:50 localhost podman[53343]: 2025-11-28 08:01:50.285558835 +0000 UTC m=+0.191894786 container init ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=container-puppet-ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, container_name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:01:50 localhost podman[53343]: 2025-11-28 08:01:50.291986592 +0000 UTC m=+0.198322543 container start ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., tcib_managed=true, container_name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:01:50 localhost podman[53343]: 2025-11-28 08:01:50.292178768 +0000 UTC m=+0.198514729 container attach 
ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:01:50 localhost puppet-user[52767]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:50 localhost puppet-user[52767]: (file & line not available) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:50 localhost puppet-user[52767]: (file & line not available) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::memcache_servers'. 
(file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36) Nov 28 03:01:50 localhost puppet-user[52767]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. 
(file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26) Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}e8f4c9c311633f219a6b4c8a97d1389467ae0d86e6640d015eb10a4c73ac6b8b' Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe' Nov 28 03:01:50 localhost puppet-user[52657]: Warning: Empty environment setting 'TLS_PASSWORD' Nov 28 03:01:50 localhost puppet-user[52657]: (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182) Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}ae9c4ab6bedd07e63d6f2c3a5743334d26ea3ed4d1f695ab855f72927fdb71bc' Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created Nov 28 03:01:50 localhost 
puppet-user[52767]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.36 seconds Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created Nov 28 03:01:50 localhost systemd[1]: var-lib-containers-storage-overlay-29e7cf6abe6a2bbfc58462ae307ac9362023c413708070730336bba274ac12e7-merged.mount: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed41a5a8171688821f2c74039d5b07173339f60e4defb1c8fdfea4439a6b75c0-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: var-lib-containers-storage-overlay-2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21-merged.mount: Deactivated successfully. Nov 28 03:01:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8834e3582a8fa1c94bce5b1886005ace73c6931a23b894267022dfd791337f10-userdata-shm.mount: Deactivated successfully. 
Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: 
/Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: 
/Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created Nov 28 03:01:50 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created Nov 28 03:01:50 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created Nov 28 03:01:51 localhost 
puppet-user[52657]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created Nov 28 03:01:51 localhost puppet-user[52767]: Notice: Applied catalog in 0.44 seconds Nov 28 03:01:51 localhost puppet-user[52767]: Application: Nov 28 03:01:51 localhost puppet-user[52767]: Initial environment: production Nov 28 03:01:51 localhost puppet-user[52767]: Converged environment: production Nov 28 03:01:51 localhost puppet-user[52767]: Run mode: user Nov 28 03:01:51 localhost puppet-user[52767]: Changes: Nov 28 03:01:51 localhost puppet-user[52767]: Total: 31 Nov 28 03:01:51 localhost puppet-user[52767]: Events: Nov 28 03:01:51 localhost puppet-user[52767]: Success: 31 Nov 28 03:01:51 localhost puppet-user[52767]: Total: 31 Nov 28 03:01:51 localhost puppet-user[52767]: Resources: Nov 28 03:01:51 localhost puppet-user[52767]: Skipped: 22 Nov 28 03:01:51 localhost puppet-user[52767]: Changed: 31 Nov 28 03:01:51 localhost puppet-user[52767]: Out of sync: 31 Nov 28 03:01:51 localhost puppet-user[52767]: Total: 151 Nov 28 03:01:51 localhost puppet-user[52767]: Time: Nov 28 03:01:51 localhost puppet-user[52767]: Package: 0.02 Nov 28 03:01:51 localhost puppet-user[52767]: Ceilometer config: 0.34 Nov 28 03:01:51 localhost puppet-user[52767]: Transaction evaluation: 0.43 Nov 28 03:01:51 localhost puppet-user[52767]: Config retrieval: 0.44 Nov 28 03:01:51 localhost puppet-user[52767]: Catalog application: 0.44 Nov 28 03:01:51 localhost puppet-user[52767]: Last run: 1764316911 Nov 28 03:01:51 localhost puppet-user[52767]: Resources: 0.00 Nov 28 03:01:51 localhost puppet-user[52767]: Total: 0.44 Nov 28 03:01:51 localhost puppet-user[52767]: Version: Nov 28 03:01:51 localhost puppet-user[52767]: Config: 1764316910 Nov 28 03:01:51 localhost puppet-user[52767]: Puppet: 7.10.0 Nov 28 03:01:51 localhost puppet-user[52657]: Notice: 
/Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created 
Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created Nov 28 03:01:51 localhost systemd[1]: libpod-904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273.scope: Deactivated successfully. Nov 28 03:01:51 localhost systemd[1]: libpod-904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273.scope: Consumed 2.956s CPU time. 
Nov 28 03:01:51 localhost podman[52721]: 2025-11-28 08:01:51.6403046 +0000 UTC m=+3.516794187 container died 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, com.redhat.component=openstack-ceilometer-central-container, container_name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', 
'/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created Nov 28 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created Nov 28 03:01:51 localhost systemd[1]: var-lib-containers-storage-overlay-af4441ca58e5a3dae70e850402577fe72fc0370c205d9690db9c04c01d30a59b-merged.mount: Deactivated successfully. 
Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created Nov 28 03:01:51 localhost podman[53640]: 2025-11-28 08:01:51.745198948 +0000 UTC m=+0.094996390 container cleanup 904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, url=https://www.redhat.com, container_name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-central-container, config_id=tripleo_puppet_step1, vcs-type=git, name=rhosp17/openstack-ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:59Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central) Nov 28 03:01:51 localhost systemd[1]: libpod-conmon-904f45bf1db4e9f2cbbbf5ea95d42519ada7d6f43d6be47e85c0a4d6697b9273.scope: Deactivated successfully. 
Nov 28 03:01:51 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::ceilometer::agent::polling#012include tripleo::profile::base::ceilometer::agent::polling#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1 Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: 
/Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created Nov 28 03:01:51 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created Nov 28 03:01:52 localhost puppet-user[53396]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 28 03:01:52 localhost puppet-user[53396]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:52 localhost puppet-user[53396]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:52 localhost puppet-user[53396]: (file & line not available) Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created Nov 28 03:01:52 localhost puppet-user[53396]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:52 localhost puppet-user[53396]: (file & line not available) Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 28 03:01:52 localhost puppet-user[53521]: (file: /etc/puppet/hiera.yaml) Nov 28 03:01:52 localhost puppet-user[53521]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:01:52 localhost puppet-user[53521]: (file & line not available) Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:01:52 localhost puppet-user[53521]: (file & line not available) Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: 
created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created Nov 28 03:01:52 localhost puppet-user[53396]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.23 seconds Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}af00b55795dabd7a8ca15fb762e773701eb5c91ea4ae135b9bcdde564d7077dd' Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created Nov 28 03:01:52 localhost 
puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Nov 28 03:01:52 localhost puppet-user[53396]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Nov 28 03:01:52 localhost puppet-user[53396]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Nov 28 03:01:52 localhost puppet-user[53521]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.30 seconds
Nov 28 03:01:52 localhost puppet-user[53396]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}6044185b1da867517684b275c4d283584d91a27b22c4084e92ff9a2cc819bcca'
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Nov 28 03:01:52 localhost puppet-user[53396]: Notice: Applied catalog in 0.11 seconds
Nov 28 03:01:52 localhost puppet-user[53396]: Application:
Nov 28 03:01:52 localhost puppet-user[53396]: Initial environment: production
Nov 28 03:01:52 localhost puppet-user[53396]: Converged environment: production
Nov 28 03:01:52 localhost puppet-user[53396]: Run mode: user
Nov 28 03:01:52 localhost puppet-user[53396]: Changes:
Nov 28 03:01:52 localhost puppet-user[53396]: Total: 3
Nov 28 03:01:52 localhost puppet-user[53396]: Events:
Nov 28 03:01:52 localhost puppet-user[53396]: Success: 3
Nov 28 03:01:52 localhost puppet-user[53396]: Total: 3
Nov 28 03:01:52 localhost puppet-user[53396]: Resources:
Nov 28 03:01:52 localhost puppet-user[53396]: Skipped: 11
Nov 28 03:01:52 localhost puppet-user[53396]: Changed: 3
Nov 28 03:01:52 localhost puppet-user[53396]: Out of sync: 3
Nov 28 03:01:52 localhost puppet-user[53396]: Total: 25
Nov 28 03:01:52 localhost puppet-user[53396]: Time:
Nov 28 03:01:52 localhost puppet-user[53396]: Concat file: 0.00
Nov 28 03:01:52 localhost puppet-user[53396]: Concat fragment: 0.00
Nov 28 03:01:52 localhost puppet-user[53396]: File: 0.01
Nov 28 03:01:52 localhost puppet-user[53396]: Transaction evaluation: 0.10
Nov 28 03:01:52 localhost puppet-user[53396]: Catalog application: 0.11
Nov 28 03:01:52 localhost puppet-user[53396]: Config retrieval: 0.29
Nov 28 03:01:52 localhost puppet-user[53396]: Last run: 1764316912
Nov 28 03:01:52 localhost puppet-user[53396]: Total: 0.11
Nov 28 03:01:52 localhost puppet-user[53396]: Version:
Nov 28 03:01:52 localhost puppet-user[53396]: Config: 1764316912
Nov 28 03:01:52 localhost puppet-user[53396]: Puppet: 7.10.0
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Nov 28 03:01:52 localhost puppet-user[52657]: Notice:
/Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53796]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642 Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53798]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-encap-type=geneve Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53814]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108 Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53819]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:hostname=np0005538515.localdomain Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005538515.novalocal' to 'np0005538515.localdomain' Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53833]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-bridge=br-int Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53837]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-remote-probe-interval=60000 Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created Nov 28 03:01:52 localhost systemd[1]: libpod-9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5.scope: Deactivated successfully. Nov 28 03:01:52 localhost systemd[1]: libpod-9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5.scope: Consumed 2.420s CPU time. Nov 28 03:01:52 localhost ovs-vsctl[53844]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:ovn-openflow-probe-interval=60 Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created Nov 28 03:01:52 localhost podman[53295]: 2025-11-28 08:01:52.754840403 +0000 UTC m=+2.745827402 container died 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z) Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created Nov 28 03:01:52 
localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53858]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created Nov 28 03:01:52 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53864]: ovs|00001|vsctl|INFO|Called 
as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000 Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53866]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0 Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53868]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:72:ce:0c Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53870]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53872]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created Nov 28 03:01:52 localhost ovs-vsctl[53874]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . 
external_ids:garp-max-timeout-sec=0
Nov 28 03:01:52 localhost puppet-user[53521]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Nov 28 03:01:53 localhost puppet-user[53521]: Notice: Applied catalog in 0.58 seconds
Nov 28 03:01:53 localhost puppet-user[53521]: Application:
Nov 28 03:01:53 localhost puppet-user[53521]: Initial environment: production
Nov 28 03:01:53 localhost puppet-user[53521]: Converged environment: production
Nov 28 03:01:53 localhost puppet-user[53521]: Run mode: user
Nov 28 03:01:53 localhost puppet-user[53521]: Changes:
Nov 28 03:01:53 localhost puppet-user[53521]: Total: 14
Nov 28 03:01:53 localhost puppet-user[53521]: Events:
Nov 28 03:01:53 localhost puppet-user[53521]: Success: 14
Nov 28 03:01:53 localhost puppet-user[53521]: Total: 14
Nov 28 03:01:53 localhost puppet-user[53521]: Resources:
Nov 28 03:01:53 localhost puppet-user[53521]: Skipped: 12
Nov 28 03:01:53 localhost puppet-user[53521]: Changed: 14
Nov 28 03:01:53 localhost puppet-user[53521]: Out of sync: 14
Nov 28 03:01:53 localhost puppet-user[53521]: Total: 29
Nov 28 03:01:53 localhost puppet-user[53521]: Time:
Nov 28 03:01:53 localhost puppet-user[53521]: Exec: 0.02
Nov 28 03:01:53 localhost puppet-user[53521]: Config retrieval: 0.33
Nov 28 03:01:53 localhost puppet-user[53521]: Vs config: 0.47
Nov 28 03:01:53 localhost puppet-user[53521]: Transaction evaluation: 0.57
Nov 28 03:01:53 localhost puppet-user[53521]: Catalog application: 0.58
Nov 28 03:01:53 localhost puppet-user[53521]: Last run: 1764316913
Nov 28 03:01:53 localhost puppet-user[53521]: Total: 0.58
Nov 28 03:01:53 localhost puppet-user[53521]: Version:
Nov 28 03:01:53 localhost puppet-user[53521]: Config: 1764316912
Nov 28 03:01:53 localhost puppet-user[53521]: Puppet: 7.10.0
Nov 28 03:01:53 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Nov 28 03:01:53 localhost systemd[1]:
tmp-crun.YsuVt6.mount: Deactivated successfully. Nov 28 03:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:53 localhost systemd[1]: var-lib-containers-storage-overlay-69bca6b1ae1a510e610471f91dc39084eac5a14908c47996b36473212637590d-merged.mount: Deactivated successfully. Nov 28 03:01:53 localhost systemd[1]: libpod-ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09.scope: Deactivated successfully. Nov 28 03:01:53 localhost systemd[1]: libpod-ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09.scope: Consumed 2.986s CPU time. Nov 28 03:01:53 localhost podman[53343]: 2025-11-28 08:01:53.528017093 +0000 UTC m=+3.434353044 container died ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, vcs-type=git, tcib_managed=true, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 
'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:01:53 localhost podman[53851]: 2025-11-28 08:01:53.7878827 +0000 UTC m=+1.025521769 container cleanup 9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, 
name=container-puppet-rsyslog, version=17.1.12, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-rsyslog, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
config_id=tripleo_puppet_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container) Nov 28 03:01:53 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 28 03:01:53 localhost systemd[1]: libpod-conmon-9c924b4b35e410ba8e6e6caa706b1f19805626094797b80857403544aece1ad5.scope: Deactivated successfully. 
Nov 28 03:01:53 localhost podman[53914]: 2025-11-28 08:01:53.804263407 +0000 UTC m=+0.265044648 container cleanup ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=container-puppet-ovn_controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 28 03:01:53 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully Nov 28 03:01:53 localhost systemd[1]: libpod-conmon-ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09.scope: Deactivated successfully. 
Nov 28 03:01:53 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::agents::ovn#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 28 03:01:53 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created Nov 28 03:01:53 localhost podman[53554]: 2025-11-28 08:01:50.469904669 +0000 UTC m=+0.037463953 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Nov 28 03:01:53 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created Nov 28 03:01:53 localhost puppet-user[52657]: Notice: 
/Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created Nov 28 03:01:54 localhost podman[53994]: 2025-11-28 08:01:54.053642707 +0000 UTC m=+0.063545573 container create 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:23:27Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-neutron-server-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 03:01:54 localhost systemd[1]: Started libpod-conmon-8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1.scope. Nov 28 03:01:54 localhost systemd[1]: Started libcrun container. 
Nov 28 03:01:54 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29bfa5a0679179b90046634e87037ab6ff6f22b5fa7106d9841b0f8caae33b/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff) Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created Nov 28 03:01:54 localhost podman[53994]: 2025-11-28 08:01:54.113125681 +0000 UTC m=+0.123028587 container init 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-neutron-server, architecture=x86_64, version=17.1.12, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:23:27Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, container_name=container-puppet-neutron, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:01:54 localhost podman[53994]: 2025-11-28 08:01:54.12062557 +0000 UTC m=+0.130528476 container start 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, name=rhosp17/openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:23:27Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=container-puppet-neutron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container) Nov 28 03:01:54 localhost podman[53994]: 2025-11-28 08:01:54.120989291 +0000 UTC m=+0.130892187 container attach 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-server-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:23:27Z, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, container_name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 
'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, config_id=tripleo_puppet_step1) Nov 28 03:01:54 localhost podman[53994]: 2025-11-28 08:01:54.023434357 +0000 UTC m=+0.033337243 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1 Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created Nov 28 03:01:54 localhost 
systemd[1]: var-lib-containers-storage-overlay-9ce8c6ec24615a8ac03ae2a4194714a4f44afbdc43ba4491ff44c91e34e068e5-merged.mount: Deactivated successfully. Nov 28 03:01:54 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ebec36eeb114e633393c34c9aea56cfd1cf07c9c3281b2bb9885574c17cfdc09-userdata-shm.mount: Deactivated successfully. Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created Nov 28 03:01:54 
localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created Nov 28 03:01:54 localhost puppet-user[52657]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4' Nov 28 03:01:54 localhost puppet-user[52657]: Notice: Applied catalog in 4.36 seconds Nov 28 03:01:54 localhost puppet-user[52657]: Application: Nov 28 03:01:54 localhost puppet-user[52657]: Initial environment: production Nov 28 03:01:54 localhost puppet-user[52657]: Converged environment: production Nov 28 03:01:54 localhost puppet-user[52657]: Run mode: user Nov 28 03:01:54 localhost puppet-user[52657]: Changes: Nov 28 03:01:54 localhost puppet-user[52657]: Total: 183 Nov 28 03:01:54 localhost puppet-user[52657]: Events: Nov 28 03:01:54 localhost puppet-user[52657]: Success: 183 Nov 28 03:01:54 localhost puppet-user[52657]: Total: 183 Nov 28 03:01:54 localhost puppet-user[52657]: Resources: Nov 28 03:01:54 localhost puppet-user[52657]: Changed: 183 Nov 
28 03:01:54 localhost puppet-user[52657]: Out of sync: 183 Nov 28 03:01:54 localhost puppet-user[52657]: Skipped: 57 Nov 28 03:01:54 localhost puppet-user[52657]: Total: 487 Nov 28 03:01:54 localhost puppet-user[52657]: Time: Nov 28 03:01:54 localhost puppet-user[52657]: Concat fragment: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: Anchor: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: File line: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: Virtlogd config: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: Virtqemud config: 0.02 Nov 28 03:01:54 localhost puppet-user[52657]: Exec: 0.02 Nov 28 03:01:54 localhost puppet-user[52657]: Virtsecretd config: 0.02 Nov 28 03:01:54 localhost puppet-user[52657]: Virtstoraged config: 0.02 Nov 28 03:01:54 localhost puppet-user[52657]: File: 0.03 Nov 28 03:01:54 localhost puppet-user[52657]: Virtproxyd config: 0.03 Nov 28 03:01:54 localhost puppet-user[52657]: Package: 0.03 Nov 28 03:01:54 localhost puppet-user[52657]: Virtnodedevd config: 0.05 Nov 28 03:01:54 localhost puppet-user[52657]: Augeas: 1.02 Nov 28 03:01:54 localhost puppet-user[52657]: Config retrieval: 1.43 Nov 28 03:01:54 localhost puppet-user[52657]: Last run: 1764316914 Nov 28 03:01:54 localhost puppet-user[52657]: Nova config: 2.93 Nov 28 03:01:54 localhost puppet-user[52657]: Transaction evaluation: 4.34 Nov 28 03:01:54 localhost puppet-user[52657]: Catalog application: 4.36 Nov 28 03:01:54 localhost puppet-user[52657]: Resources: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: Concat file: 0.00 Nov 28 03:01:54 localhost puppet-user[52657]: Total: 4.36 Nov 28 03:01:54 localhost puppet-user[52657]: Version: Nov 28 03:01:54 localhost puppet-user[52657]: Config: 1764316909 Nov 28 03:01:54 localhost puppet-user[52657]: Puppet: 7.10.0 Nov 28 03:01:55 localhost systemd[1]: libpod-d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b.scope: Deactivated successfully. 
Nov 28 03:01:55 localhost systemd[1]: libpod-d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b.scope: Consumed 8.260s CPU time.
Nov 28 03:01:55 localhost podman[52490]: 2025-11-28 08:01:55.730317768 +0000 UTC m=+10.072198540 container died d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, vcs-type=git, container_name=container-puppet-nova_libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 28 03:01:55 localhost systemd[1]: tmp-crun.NfEF9L.mount: Deactivated successfully.
Nov 28 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b-userdata-shm.mount: Deactivated successfully.
Nov 28 03:01:55 localhost systemd[1]: var-lib-containers-storage-overlay-4d267351eb91c27e496fa400ef9055b36048428ec01962767ba6b671d1258ac4-merged.mount: Deactivated successfully.
Nov 28 03:01:55 localhost puppet-user[54024]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Nov 28 03:01:55 localhost podman[54067]: 2025-11-28 08:01:55.943245315 +0000 UTC m=+0.199649511 container cleanup d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com)
Nov 28 03:01:55 localhost systemd[1]: libpod-conmon-d184a5420aa167537c4418fe72018d93cc08508bdda98c15877ff895cf99cb9b.scope: Deactivated successfully.
Nov 28 03:01:55 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages#012# TODO(emilien): figure how to deal with libvirt profile.#012# We'll probably treat it like we do with Neutron plugins.#012# Until then, just include it in the default nova-compute role.#012include tripleo::profile::base::nova::compute::libvirt#012#012include tripleo::profile::base::nova::libvirt#012#012include tripleo::profile::base::nova::compute::libvirt_guests#012#012include tripleo::profile::base::sshd#012include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Nov 28 03:01:56 localhost puppet-user[54024]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 28 03:01:56 localhost puppet-user[54024]: (file: /etc/puppet/hiera.yaml)
Nov 28 03:01:56 localhost puppet-user[54024]: Warning: Undefined variable '::deploy_config_name';
Nov 28 03:01:56 localhost puppet-user[54024]: (file & line not available)
Nov 28 03:01:56 localhost puppet-user[54024]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 28 03:01:56 localhost puppet-user[54024]: (file & line not available)
Nov 28 03:01:56 localhost puppet-user[54024]: Warning: Unknown variable: 'dhcp_agents_per_net'.
(file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.67 seconds
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Nov 28 03:01:56 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Nov 28 03:01:57 localhost puppet-user[54024]: Notice: Applied catalog in 0.44 seconds
Nov 28 03:01:57 localhost puppet-user[54024]: Application:
Nov 28 03:01:57 localhost puppet-user[54024]: Initial environment: production
Nov 28 03:01:57 localhost puppet-user[54024]: Converged environment: production
Nov 28 03:01:57 localhost puppet-user[54024]: Run mode: user
Nov 28 03:01:57 localhost puppet-user[54024]: Changes:
Nov 28 03:01:57 localhost puppet-user[54024]: Total: 33
Nov 28 03:01:57 localhost puppet-user[54024]: Events:
Nov 28 03:01:57 localhost puppet-user[54024]: Success: 33
Nov 28 03:01:57 localhost puppet-user[54024]: Total: 33
Nov 28 03:01:57 localhost puppet-user[54024]: Resources:
Nov 28 03:01:57 localhost puppet-user[54024]: Skipped: 21
Nov 28 03:01:57 localhost puppet-user[54024]: Changed: 33
Nov 28 03:01:57 localhost puppet-user[54024]: Out of sync: 33
Nov 28 03:01:57 localhost puppet-user[54024]: Total: 155
Nov 28 03:01:57 localhost puppet-user[54024]: Time:
Nov 28 03:01:57 localhost puppet-user[54024]: Resources: 0.00
Nov 28 03:01:57 localhost puppet-user[54024]: Ovn metadata agent config: 0.02
Nov 28 03:01:57 localhost puppet-user[54024]: Neutron config: 0.37
Nov 28 03:01:57 localhost puppet-user[54024]: Transaction evaluation: 0.44
Nov 28 03:01:57 localhost puppet-user[54024]: Catalog application: 0.44
Nov 28 03:01:57 localhost puppet-user[54024]: Config retrieval: 0.74
Nov 28 03:01:57 localhost puppet-user[54024]: Last run: 1764316917
Nov 28 03:01:57 localhost puppet-user[54024]: Total: 0.45
Nov 28 03:01:57 localhost puppet-user[54024]: Version:
Nov 28 03:01:57 localhost puppet-user[54024]: Config: 1764316916
Nov 28 03:01:57 localhost puppet-user[54024]: Puppet: 7.10.0
Nov 28 03:01:57 localhost systemd[1]: libpod-8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1.scope: Deactivated successfully.
Nov 28 03:01:57 localhost systemd[1]: libpod-8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1.scope: Consumed 3.676s CPU time.
Nov 28 03:01:57 localhost podman[54208]: 2025-11-28 08:01:57.875844997 +0000 UTC m=+0.050612376 container died 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:23:27Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_puppet_step1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, url=https://www.redhat.com, container_name=container-puppet-neutron)
Nov 28 03:01:57 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1-userdata-shm.mount: Deactivated successfully.
Nov 28 03:01:57 localhost systemd[1]: var-lib-containers-storage-overlay-7c29bfa5a0679179b90046634e87037ab6ff6f22b5fa7106d9841b0f8caae33b-merged.mount: Deactivated successfully.
Nov 28 03:01:57 localhost podman[54208]: 2025-11-28 08:01:57.958408073 +0000 UTC m=+0.133175412 container cleanup 8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-server-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, container_name=container-puppet-neutron)
Nov 28 03:01:57 localhost systemd[1]: libpod-conmon-8692e42b842ef6461ddaf8f87dcd08c54fe8471c3dd5454dde35d88654a795a1.scope: Deactivated successfully.
Nov 28 03:01:57 localhost python3[52321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005538515 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages#012include tripleo::profile::base::neutron::ovn_metadata#012 --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005538515', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Nov 28 03:01:58 localhost python3[54260]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:01:59 localhost python3[54292]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:02:00 localhost python3[54342]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:02:00 localhost python3[54385]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316920.050109-84865-133296093000290/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:02:01 localhost python3[54447]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:02:02 localhost python3[54490]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316920.912403-84865-205667006347731/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:02:02 localhost python3[54552]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:02:03 localhost python3[54595]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316922.5342379-85024-154985672508809/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:02:03 localhost python3[54657]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:02:04 localhost python3[54700]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316923.4300764-85054-14470321130955/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:02:04 localhost python3[54730]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:02:04 localhost systemd[1]: Reloading.
Nov 28 03:02:04 localhost systemd-rc-local-generator[54756]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:02:04 localhost systemd-sysv-generator[54760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:02:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:04 localhost systemd[1]: Reloading. Nov 28 03:02:05 localhost systemd-rc-local-generator[54792]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:02:05 localhost systemd-sysv-generator[54795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:02:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:05 localhost systemd[1]: Starting TripleO Container Shutdown... Nov 28 03:02:05 localhost systemd[1]: Finished TripleO Container Shutdown. Nov 28 03:02:05 localhost python3[54853]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:02:06 localhost python3[54896]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316925.3613842-85176-54340264996382/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:06 localhost python3[54958]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False 
get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:02:07 localhost python3[55001]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316926.291196-85234-14471150599337/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:07 localhost python3[55031]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:02:07 localhost systemd[1]: Reloading. Nov 28 03:02:07 localhost systemd-sysv-generator[55062]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:02:07 localhost systemd-rc-local-generator[55057]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:02:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:07 localhost systemd[1]: Reloading. Nov 28 03:02:07 localhost systemd-sysv-generator[55097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:02:07 localhost systemd-rc-local-generator[55093]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 03:02:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:08 localhost systemd[1]: Starting Create netns directory... Nov 28 03:02:08 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 03:02:08 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 03:02:08 localhost systemd[1]: Finished Create netns directory. Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 6e6d33b0e4909c73f2f7adca3bc870a0 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: 18a2751501986164e709168f53ab57c8 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost 
python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: f62921da3a3d0eed1be38a46b3ed6ac3 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 185ba876a5902dbf87b8591344afd39d Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 185ba876a5902dbf87b8591344afd39d Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 08c21dad54d1ba598c6e2fae6b853aba Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: 18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:08 localhost python3[55124]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: bbb5ea37891e3118676a78b59837de90 Nov 28 03:02:09 localhost python3[55180]: 
ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.263095861 +0000 UTC m=+0.079726573 container create 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr_init_logs, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:02:10 localhost systemd[1]: Started libpod-conmon-325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601.scope. Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.217522983 +0000 UTC m=+0.034153725 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 28 03:02:10 localhost systemd[1]: Started libcrun container. Nov 28 03:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a670f87546e9222dc3530777cbcbb6bd2a424665ad22aef150e174bea9c765/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.360538218 +0000 UTC m=+0.177168930 container init 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 
17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team) Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.371757067 +0000 UTC m=+0.188387779 container start 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044) Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.372136489 +0000 UTC m=+0.188767211 container attach 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, container_name=metrics_qdr_init_logs, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, release=1761123044) Nov 28 03:02:10 localhost systemd[1]: libpod-325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601.scope: Deactivated successfully. Nov 28 03:02:10 localhost podman[55218]: 2025-11-28 08:02:10.381601224 +0000 UTC m=+0.198231936 container died 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, container_name=metrics_qdr_init_logs, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Nov 28 03:02:10 localhost podman[55237]: 2025-11-28 08:02:10.468826722 +0000 UTC m=+0.078311799 container cleanup 325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, batch=17.1_20251118.1, container_name=metrics_qdr_init_logs) Nov 28 03:02:10 localhost 
systemd[1]: libpod-conmon-325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601.scope: Deactivated successfully. Nov 28 03:02:10 localhost python3[55180]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd Nov 28 03:02:10 localhost podman[55312]: 2025-11-28 08:02:10.935399734 +0000 UTC m=+0.078553587 container create 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, release=1761123044, build-date=2025-11-18T22:49:46Z, version=17.1.12, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Nov 28 03:02:10 localhost systemd[1]: Started libpod-conmon-9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.scope. Nov 28 03:02:10 localhost systemd[1]: Started libcrun container. 
Nov 28 03:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/876187b8bc68a02fc79261d7a49dfade5cc37ca730d23b4f758fcf788c522d06/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 28 03:02:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/876187b8bc68a02fc79261d7a49dfade5cc37ca730d23b4f758fcf788c522d06/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff) Nov 28 03:02:10 localhost podman[55312]: 2025-11-28 08:02:10.8955865 +0000 UTC m=+0.038740393 image pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 28 03:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:02:11 localhost podman[55312]: 2025-11-28 08:02:11.025833529 +0000 UTC m=+0.168987372 container init 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, distribution-scope=public, container_name=metrics_qdr, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 28 03:02:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:02:11 localhost podman[55312]: 2025-11-28 08:02:11.058866498 +0000 UTC m=+0.202020341 container start 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, 
release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:02:11 localhost python3[55180]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6e6d33b0e4909c73f2f7adca3bc870a0 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 Nov 28 03:02:11 localhost podman[55334]: 2025-11-28 08:02:11.161914125 +0000 UTC m=+0.093662584 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:02:11 localhost systemd[1]: var-lib-containers-storage-overlay-92a670f87546e9222dc3530777cbcbb6bd2a424665ad22aef150e174bea9c765-merged.mount: Deactivated successfully. 
Nov 28 03:02:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-325abc01ba4485ffe3dd4f572ea163a2b9aaa7bcf66a88a3ab110fbd81332601-userdata-shm.mount: Deactivated successfully. Nov 28 03:02:11 localhost podman[55334]: 2025-11-28 08:02:11.390161028 +0000 UTC m=+0.321909497 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:02:11 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:02:11 localhost python3[55407]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:11 localhost python3[55423]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:02:12 localhost python3[55484]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764316932.0419774-85354-154070737403857/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None 
content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:12 localhost python3[55500]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 03:02:12 localhost systemd[1]: Reloading. Nov 28 03:02:13 localhost systemd-rc-local-generator[55521]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:02:13 localhost systemd-sysv-generator[55526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:02:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:13 localhost python3[55551]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:02:13 localhost systemd[1]: Reloading. Nov 28 03:02:14 localhost systemd-rc-local-generator[55578]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:02:14 localhost systemd-sysv-generator[55583]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:02:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:02:14 localhost systemd[1]: Starting dnf makecache... 
Nov 28 03:02:14 localhost systemd[1]: Starting metrics_qdr container... Nov 28 03:02:14 localhost systemd[1]: Started metrics_qdr container. Nov 28 03:02:14 localhost dnf[55591]: Updating Subscription Management repositories. Nov 28 03:02:14 localhost python3[55632]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:16 localhost python3[55753]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005538515 step=1 update_config_hash_only=False Nov 28 03:02:16 localhost dnf[55591]: Failed determining last makecache time. 
Nov 28 03:02:16 localhost python3[55770]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:02:17 localhost python3[55786]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 28 03:02:21 localhost dnf[55591]: Fast Datapath for RHEL 9 x86_64 (RPMs) 793 B/s | 4.0 kB 00:05 Nov 28 03:02:24 localhost dnf[55591]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_ 1.2 kB/s | 4.0 kB 00:03 Nov 28 03:02:25 localhost dnf[55591]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 30 kB/s | 4.5 kB 00:00 Nov 28 03:02:25 localhost dnf[55591]: Red Hat Enterprise Linux 9 for x86_64 - AppStre 14 kB/s | 4.5 kB 00:00 Nov 28 03:02:29 localhost dnf[55591]: Red Hat Enterprise Linux 9 for x86_64 - High Av 1.1 kB/s | 4.0 kB 00:03 Nov 28 03:02:29 localhost dnf[55591]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 41 kB/s | 4.1 kB 00:00 Nov 28 03:02:30 localhost dnf[55591]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS 3.2 kB/s | 4.1 kB 00:01 Nov 28 03:02:30 localhost dnf[55591]: Metadata cache created. Nov 28 03:02:31 localhost systemd[1]: dnf-makecache.service: Deactivated successfully. Nov 28 03:02:31 localhost systemd[1]: Finished dnf makecache. Nov 28 03:02:31 localhost systemd[1]: dnf-makecache.service: Consumed 2.911s CPU time. Nov 28 03:02:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:02:41 localhost podman[55870]: 2025-11-28 08:02:41.980507567 +0000 UTC m=+0.084101925 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Nov 28 03:02:42 localhost podman[55870]: 2025-11-28 08:02:42.217541905 +0000 UTC m=+0.321136233 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr) Nov 28 03:02:42 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:03:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:03:12 localhost podman[55900]: 2025-11-28 08:03:12.967030257 +0000 UTC m=+0.078730412 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Nov 28 03:03:13 localhost podman[55900]: 2025-11-28 08:03:13.159385235 +0000 UTC m=+0.271085440 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:03:13 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:03:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:03:43 localhost systemd[1]: tmp-crun.LwrBSE.mount: Deactivated successfully. 
Nov 28 03:03:43 localhost podman[56006]: 2025-11-28 08:03:43.978204803 +0000 UTC m=+0.089933631 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=metrics_qdr, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, architecture=x86_64) Nov 28 03:03:44 localhost podman[56006]: 2025-11-28 08:03:44.200504955 +0000 UTC m=+0.312233773 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com) Nov 28 03:03:44 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:04:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:04:14 localhost systemd[1]: tmp-crun.8Z6lC3.mount: Deactivated successfully. 
Nov 28 03:04:14 localhost podman[56035]: 2025-11-28 08:04:14.980705371 +0000 UTC m=+0.086260487 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_id=tripleo_step1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=) Nov 28 03:04:15 localhost podman[56035]: 2025-11-28 08:04:15.176421465 +0000 UTC m=+0.281976551 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:04:15 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:04:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:04:45 localhost podman[56140]: 2025-11-28 08:04:45.975760774 +0000 UTC m=+0.086256757 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:04:46 localhost podman[56140]: 2025-11-28 08:04:46.194641627 +0000 UTC m=+0.305137580 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:04:46 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:05:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:05:17 localhost podman[56170]: 2025-11-28 08:05:17.005439372 +0000 UTC m=+0.086691751 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=metrics_qdr, vcs-type=git, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:05:17 localhost podman[56170]: 2025-11-28 08:05:17.196518926 +0000 UTC m=+0.277771355 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true) Nov 28 03:05:17 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:05:30 localhost sshd[56199]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:05:30 localhost sshd[56200]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:05:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:05:47 localhost podman[56278]: 2025-11-28 08:05:47.978647087 +0000 UTC m=+0.089827515 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 28 03:05:48 localhost podman[56278]: 2025-11-28 08:05:48.158870572 +0000 UTC m=+0.270050940 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12) Nov 28 03:05:48 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:06:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:06:18 localhost podman[56308]: 2025-11-28 08:06:18.976099431 +0000 UTC m=+0.085246898 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Nov 28 03:06:19 localhost podman[56308]: 2025-11-28 08:06:19.157524411 +0000 UTC m=+0.266671898 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1) Nov 28 03:06:19 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:06:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:06:49 localhost systemd[1]: tmp-crun.FReXdP.mount: Deactivated successfully. 
Nov 28 03:06:49 localhost podman[56416]: 2025-11-28 08:06:49.977263797 +0000 UTC m=+0.088380318 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, architecture=x86_64, config_id=tripleo_step1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12) Nov 28 03:06:50 localhost podman[56416]: 2025-11-28 08:06:50.175685737 +0000 UTC m=+0.286802298 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:06:50 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:06:51 localhost sshd[56445]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:07:06 localhost ceph-osd[33334]: osd.4 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [4,5,3] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:07:08 localhost ceph-osd[33334]: osd.4 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [4,5,3] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:07:10 localhost ceph-osd[33334]: osd.4 pg_epoch: 22 pg[3.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [5,4,0] r=1 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:07:12 localhost ceph-osd[33334]: osd.4 pg_epoch: 24 pg[4.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [3,4,5] r=1 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:07:15 localhost ceph-osd[33334]: osd.4 pg_epoch: 26 pg[5.0( empty local-lis/les=0/0 n=0 ec=26/26 lis/c=0/0 les/c/f=0/0/0 sis=26) [2,3,4] r=2 lpr=26 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:07:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:07:20 localhost podman[56447]: 2025-11-28 08:07:20.97263299 +0000 UTC m=+0.078729099 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, release=1761123044, container_name=metrics_qdr)
Nov 28 03:07:21 localhost podman[56447]: 2025-11-28 08:07:21.163528355 +0000 UTC m=+0.269624444 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Nov 28 03:07:21 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:07:29 localhost ceph-osd[33334]: osd.4 pg_epoch: 32 pg[6.0( empty local-lis/les=0/0 n=0 ec=32/32 lis/c=0/0 les/c/f=0/0/0 sis=32) [0,4,2] r=1 lpr=32 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Nov 28 03:07:29 localhost ceph-osd[32393]: osd.1 pg_epoch: 33 pg[7.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1,5,3] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:07:30 localhost ceph-osd[32393]: osd.1 pg_epoch: 34 pg[7.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1,5,3] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:07:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:07:51 localhost systemd[1]: tmp-crun.6HtFSN.mount: Deactivated successfully.
Nov 28 03:07:51 localhost podman[56523]: 2025-11-28 08:07:51.974249246 +0000 UTC m=+0.081695858 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, release=1761123044, architecture=x86_64)
Nov 28 03:07:52 localhost podman[56523]: 2025-11-28 08:07:52.184854429 +0000 UTC m=+0.292301051 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_id=tripleo_step1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Nov 28 03:07:52 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:07:59 localhost ceph-osd[33334]: osd.4 pg_epoch: 38 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=13.071694374s) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active pruub 1173.604248047s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:07:59 localhost ceph-osd[33334]: osd.4 pg_epoch: 38 pg[2.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38 pruub=13.071694374s) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown pruub 1173.604248047s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28
03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.19( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.5( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.8( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.3( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.4( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.2( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.6( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.7( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.9( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.a( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.b( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.c( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.d( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.e( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.f( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.10( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.11( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.12( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.14( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.13( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.15( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.16( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.18( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.17( empty local-lis/les=20/21 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.0( empty local-lis/les=38/39 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.16( empty local-lis/les=38/39 n=0
ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:00 localhost ceph-osd[33334]: osd.4 pg_epoch: 39 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=20/20 les/c/f=21/21/0 sis=38) [4,5,3] r=0 lpr=38 pi=[20,38)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete
Nov 28 03:08:01 localhost ceph-osd[33334]: osd.4 pg_epoch: 40 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=15.074197769s) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 active pruub 1177.627685547s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:01 localhost ceph-osd[33334]: osd.4 pg_epoch: 40 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.075235367s) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 active pruub 1175.630004883s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:01 localhost ceph-osd[33334]: osd.4
pg_epoch: 40 pg[4.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=40 pruub=15.072606087s) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1177.627685547s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:01 localhost ceph-osd[33334]: osd.4 pg_epoch: 40 pg[3.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=40 pruub=13.072304726s) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1175.630004883s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.19( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.18( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.18( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.4( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.3( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.2( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.5( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.4( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.3( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.5( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.6( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.7( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.7( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.6( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.8( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.f( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.c( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.d( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray
Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.b( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28
03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.8( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.9( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.16( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.2( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.10( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 
localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.11( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.17( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.13( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.a( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.15( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.12( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.14( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.15( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 
localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.12( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.14( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.13( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.17( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.16( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.10( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.11( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.19( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 
localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1a( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[4.1e( empty local-lis/les=24/25 n=0 ec=40/24 lis/c=24/24 les/c/f=25/25/0 sis=40) [3,4,5] r=1 lpr=40 pi=[24,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1d( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1b( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1f( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1e( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.1c( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 localhost ceph-osd[33334]: osd.4 pg_epoch: 41 pg[3.9( empty local-lis/les=22/23 n=0 ec=40/22 lis/c=22/22 les/c/f=23/23/0 sis=40) [5,4,0] r=1 lpr=40 pi=[22,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:02 
localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.0 scrub starts Nov 28 03:08:02 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.0 scrub ok Nov 28 03:08:02 localhost python3[56567]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:03 localhost ceph-osd[33334]: osd.4 pg_epoch: 42 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=15.902339935s) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 active pruub 1180.485839844s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:03 localhost ceph-osd[33334]: osd.4 pg_epoch: 42 pg[6.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=42 pruub=13.960138321s) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 active pruub 1178.544433594s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:03 localhost ceph-osd[33334]: osd.4 pg_epoch: 42 pg[6.0( empty local-lis/les=32/33 n=0 ec=32/32 lis/c=32/32 les/c/f=33/33/0 sis=42 pruub=13.955951691s) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1178.544433594s@ mbc={}] state: transitioning to Stray Nov 28 03:08:03 localhost ceph-osd[33334]: osd.4 pg_epoch: 42 pg[5.0( empty local-lis/les=26/27 n=0 ec=26/26 lis/c=26/26 les/c/f=27/27/0 sis=42 pruub=15.897210121s) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY pruub 1180.485839844s@ mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.10( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.13( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.11( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1c( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.12( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.12( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.13( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.10( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.17( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.15( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.16( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.14( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.16( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.11( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.17( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 
mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.14( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.b( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.8( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.15( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.a( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.9( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.8( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 
unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.9( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.f( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.e( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.7( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.d( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.4( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY 
mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.6( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.5( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.2( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.3( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.3( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.5( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: 
transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.4( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.6( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.2( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.c( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1e( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1d( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning 
to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.f( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1d( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1e( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1c( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.7( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.18( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1b( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.1a( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray 
Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1f( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.19( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.19( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1a( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[6.1b( empty local-lis/les=32/33 n=0 ec=42/32 lis/c=32/32 les/c/f=33/33/0 sis=42) [0,4,2] r=1 lpr=42 pi=[32,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 43 pg[5.18( empty local-lis/les=26/27 n=0 ec=42/26 lis/c=26/26 les/c/f=27/27/0 sis=42) [2,3,4] r=2 lpr=42 pi=[26,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state: transitioning to Stray Nov 28 03:08:04 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.18 scrub starts Nov 28 03:08:04 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.18 scrub ok Nov 28 03:08:04 localhost python3[56583]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:05 localhost ceph-osd[32393]: osd.1 pg_epoch: 44 pg[7.0( v 36'39 (0'0,36'39] local-lis/les=33/34 n=22 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=44 pruub=12.875659943s) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 36'38 mlcod 36'38 active pruub 1184.027221680s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:05 localhost ceph-osd[32393]: osd.1 pg_epoch: 44 pg[7.0( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=44 pruub=12.875659943s) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 36'38 mlcod 0'0 unknown pruub 1184.027221680s@ mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.f( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.e( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.c( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.d( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost 
ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.9( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.4( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.8( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.7( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 
sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.2( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.a( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=33/34 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.0( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 36'38 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 
active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 
crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[32393]: osd.1 pg_epoch: 45 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=33/33 les/c/f=34/34/0 sis=44) [1,5,3] r=0 lpr=44 pi=[33,44)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:06 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.14 scrub starts Nov 28 03:08:06 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.14 scrub ok Nov 28 03:08:06 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.0 scrub starts Nov 28 03:08:06 localhost python3[56599]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:10 localhost python3[56647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:10 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.d deep-scrub starts Nov 28 03:08:10 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.d deep-scrub ok Nov 28 03:08:10 localhost python3[56690]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317289.9633682-92600-5183222995555/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=98ffd20e3b9db1cae39a950d9da1f69e92796658 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:12 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.17 scrub starts Nov 28 03:08:12 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.17 scrub ok Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.11( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.16( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 
localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.10( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977840424s) [0,1,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792602539s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.13( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.830454826s) [4,5,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645263672s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.14( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,2,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977735519s) [0,1,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792602539s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1f( empty local-lis/les=42/43 n=0 
ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.830454826s) [4,5,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.645263672s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.10( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831009865s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.646118164s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.10( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831009865s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.646118164s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.16( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825892448s) [3,2,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641235352s@ mbc={}] start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.13( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825799942s) [3,2,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641235352s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.17( empty 
local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976946831s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792480469s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976910591s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792480469s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976987839s) [2,1,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792724609s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976950645s) [2,1,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792724609s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.980078697s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.795776367s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, 
up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825941086s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641723633s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.980021477s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.795776367s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.10( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825866699s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641723633s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825681686s) [0,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641601562s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5], acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828049660s) [4,3,5] r=0 
lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643920898s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.833456039s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649414062s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.d( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.16( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825644493s) [0,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641601562s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.15( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.833456039s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.649414062s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.15( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828049660s) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.643920898s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828238487s) [2,0,1] 
r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644287109s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.8( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828203201s) [2,0,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644287109s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977394104s) [2,4,0] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793579102s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977326393s) [2,4,0] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793579102s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832056046s) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.648559570s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 
03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.a( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832056046s) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.648559570s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831913948s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.648559570s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977123260s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793701172s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.9( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831879616s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.648559570s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977123260s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.793701172s@ mbc={}] state: transitioning to Primary Nov 28 
03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827555656s) [2,4,0] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644287109s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.1c( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827482224s) [2,4,0] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644287109s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832635880s) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649536133s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.e( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832635880s) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.649536133s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825080872s) [2,0,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.642089844s@ mbc={}] start_peering_interval up [2,3,4] -> 
[2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826766968s) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643798828s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.19( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.7( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826766968s) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.643798828s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825041771s) [2,0,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.642089844s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977373123s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794067383s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 
les/c/f=41/41/0 sis=46 pruub=12.976415634s) [0,4,2] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793701172s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.1b( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976384163s) [0,4,2] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793701172s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.977373123s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.794067383s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832133293s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649658203s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.5( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832133293s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.649658203s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 
les/c/f=41/41/0 sis=46 pruub=12.976243973s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794067383s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.976193428s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794067383s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823978424s) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641967773s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832106590s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650024414s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823978424s) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.641967773s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825306892s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 
mlcod 0'0 active pruub 1189.643432617s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.3( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832106590s) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.650024414s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975559235s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793701172s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.3( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825263977s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.643432617s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975559235s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.793701172s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975432396s) [4,0,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793823242s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832025528s) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650390625s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.824101448s) [0,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.642456055s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975432396s) [4,0,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.793823242s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.7( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.832025528s) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.650390625s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.5( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.824048996s) [0,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.642456055s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825001717s) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643676758s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.9( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.2( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825001717s) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.643676758s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831455231s) [2,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650146484s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831419945s) [2,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650146484s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825596809s) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644287109s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.f( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825596809s) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.644287109s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827201843s) [2,4,3] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645996094s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826667786s) [0,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645507812s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831649780s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650634766s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827166557s) [2,4,3] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645996094s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.19( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826630592s) [0,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645507812s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.18( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831589699s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650634766s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831677437s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650756836s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975507736s) [0,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794677734s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.975453377s) [0,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794677734s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826570511s) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645751953s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.18( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.826570511s) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.645751953s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831312180s) [5,1,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650756836s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1b( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831275940s) [5,1,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650756836s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.822189331s) [3,5,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641845703s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.030885696s) [0,4,2] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.11( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.822154045s) [3,5,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641845703s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.030819893s) [0,4,2] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968659401s) [2,3,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.788574219s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.19( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968625069s) [2,3,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.788574219s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.974594116s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794677734s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.030447006s) [3,5,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.974533081s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794677734s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.030405998s) [3,5,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1a( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.831677437s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.650756836s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967097282s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.787475586s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.18( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967097282s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.787475586s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.853073120s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.204345703s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.853013039s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.204345703s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.856533051s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.208007812s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.856444359s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.208007812s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.851760864s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.203857422s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.851696968s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.203857422s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.852090836s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.204223633s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.857297897s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.208618164s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.855573654s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.208129883s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.855964661s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.208618164s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.17( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,5,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.851481438s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.204223633s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.855342865s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.208129883s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.850686073s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.204101562s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.850599289s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.204101562s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.830129623s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650634766s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.19( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.830075264s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650634766s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.854302406s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1188.208007812s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.16( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.974110603s) [1,3,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794799805s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46 pruub=8.853870392s) [4,2,3] r=-1 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1188.208007812s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.029905319s) [2,4,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.974070549s) [1,3,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794799805s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.029850960s) [2,4,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968106270s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.788940430s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.029664040s) [2,1,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,0,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.029624939s) [2,1,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968106270s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.788940430s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968004227s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.789306641s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968004227s) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.789306641s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.5( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.973605156s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794555664s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.2( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,0,2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.8( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.1a( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,5,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.829088211s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650634766s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.973084450s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794555664s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823925018s) [1,0,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645507812s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1f( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.829054832s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650634766s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823860168s) [1,0,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645507812s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.971156120s) [5,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792968750s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.028826714s) [5,4,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.971123695s) [5,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792968750s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.028784752s) [5,4,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968725204s) [2,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.790527344s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823025703s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645019531s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828413963s) [5,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650390625s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968675613s) [2,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.790527344s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823025703s) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1189.645019531s@ mbc={}] state: transitioning to Primary
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1e( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.828380585s) [5,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650390625s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967222214s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.789428711s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967191696s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.789428711s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.027956009s) [1,5,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850219727s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.027852058s) [1,5,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850219727s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968965530s) [2,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791503906s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.968930244s) [2,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791503906s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.822300911s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645019531s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1d( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.822244644s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645019531s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827562332s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650390625s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821415901s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644165039s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.1e( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821235657s) [0,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644165039s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.969354630s) [3,2,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792480469s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.027047157s) [3,4,2] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850097656s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.969314575s) [3,2,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792480469s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1d( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827419281s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650390625s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.19( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026980400s) [3,4,2] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850097656s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966959000s) [2,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.790039062s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966921806s) [2,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.790039062s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827233315s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650512695s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.c( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.827199936s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650512695s@ mbc={}] state: transitioning to Stray
Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967514992s) [4,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.790893555s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting
[3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.970689774s) [5,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794067383s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967514992s) [4,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.790893555s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026292801s) [1,2,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849853516s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.970632553s) [5,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794067383s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.970430374s) [3,2,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793945312s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, 
features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.8( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026234627s) [1,2,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849853516s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026973724s) [2,0,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966407776s) [2,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.790039062s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.970390320s) [3,2,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793945312s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.3( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966372490s) [2,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.790039062s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.970038414s) 
[3,5,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793701172s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.5( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026905060s) [2,0,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.969979286s) [3,5,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793701172s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025691032s) [4,3,5] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849487305s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.3( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025691032s) [4,3,5] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1185.849487305s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967890739s) [1,5,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791870117s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 
4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.5( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967849731s) [1,5,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791870117s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825885773s) [3,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650024414s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.6( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825831413s) [3,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650024414s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.818105698s) [5,3,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.642333984s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966425896s) [0,5,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.790527344s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost 
ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.4( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.818045616s) [5,3,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.642333984s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026214600s) [3,2,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850585938s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.4( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966373444s) [0,5,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.790527344s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.4( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.026171684s) [3,2,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850585938s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967329979s) [2,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791870117s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.2( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967271805s) [2,1,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791870117s@ 
mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025094986s) [4,2,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849853516s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966789246s) [2,1,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791503906s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.7( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025094986s) [4,2,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1185.849853516s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025369644s) [1,0,2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850219727s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.966738701s) [2,1,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791503906s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost 
ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.2( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.025326729s) [1,0,2] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850219727s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.969273567s) [3,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794067383s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825081825s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.650146484s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.6( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.024597168s) [3,2,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849487305s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.969213486s) [3,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794067383s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.6( empty local-lis/les=38/39 n=0 
ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.024558067s) [3,2,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849487305s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.024641991s) [3,5,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849853516s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965794563s) [0,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791015625s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.1( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.024593353s) [3,5,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849853516s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.2( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.825022697s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.650146484s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.824617386s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649902344s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], 
acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.815999985s) [3,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641723633s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.14( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.815927505s) [3,4,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641723633s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.4( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.824579239s) [3,1,5] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.649902344s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.817840576s) [3,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643188477s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.964966774s) [5,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791259766s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 2, features acting 
4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.6( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.964921951s) [5,3,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791259766s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.6( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.816976547s) [3,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.643188477s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823229790s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649658203s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967538834s) [2,0,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793945312s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967495918s) [2,0,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793945312s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.d( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.823106766s) [1,3,2] r=-1 
lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.649658203s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.023468971s) [3,4,5] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.850341797s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.9( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.023432732s) [3,4,5] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.850341797s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965711594s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792724609s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.7( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965720177s) [0,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791015625s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.f( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965650558s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792724609s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 
les/c/f=41/41/0 sis=46 pruub=12.964346886s) [4,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791503906s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967173576s) [3,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794433594s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.022138596s) [2,3,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849365234s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.c( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.964346886s) [4,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.791503906s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.967098236s) [3,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794433594s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.a( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.022038460s) [2,3,1] r=-1 lpr=46 pi=[38,46)/1 
crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849365234s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.963990211s) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791625977s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.021915436s) [5,1,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849487305s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.d( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.963990211s) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.791625977s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.b( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.021776199s) [5,1,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849487305s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.816356659s) [3,4,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644287109s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 
4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.c( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.816319466s) [3,4,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644287109s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821342468s) [1,2,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649414062s@ mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821489334s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.649658203s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.8( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821275711s) [1,2,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.649414062s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.f( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.821254730s) [3,5,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.649658203s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965167999s) [1,2,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 
mlcod 0'0 active pruub 1187.793579102s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.965078354s) [1,2,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793579102s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.020599365s) [2,0,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849243164s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.c( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.020244598s) [2,0,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849243164s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.816335678s) [5,0,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645507812s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.020224571s) [5,1,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849487305s@ mbc={}] start_peering_interval up 
[4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.b( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.816245079s) [5,0,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645507812s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.962602615s) [1,0,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791870117s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.963671684s) [0,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793212891s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.a( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.962430954s) [1,0,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791870117s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.814314842s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643798828s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, 
up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.b( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.963333130s) [0,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793212891s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.964268684s) [1,5,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.794433594s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.d( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.019411087s) [5,1,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849487305s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.964202881s) [1,5,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.794433594s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.813126564s) [1,5,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643798828s@ mbc={}] start_peering_interval up [2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.9( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 
les/c/f=43/43/0 sis=46 pruub=14.813075066s) [1,5,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.643798828s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.961013794s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791870117s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.018561363s) [3,2,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849243164s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.e( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.018381119s) [3,2,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849243164s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.8( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960960388s) [5,4,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791870117s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.817350388s) [3,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.648437500s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, 
up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.b( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.817280769s) [3,1,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.648437500s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.017602921s) [2,4,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848876953s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.f( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.017548561s) [2,4,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.848876953s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960121155s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791748047s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.16( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960069656s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791748047s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 
les/c/f=43/43/0 sis=46 pruub=14.812532425s) [3,5,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644287109s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.17( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.812440872s) [3,5,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644287109s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960086823s) [5,0,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791992188s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.017263412s) [2,0,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.849243164s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.10( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.017198563s) [2,0,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.849243164s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.9( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.959957123s) [5,0,1] r=-1 lpr=46 
pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791992188s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.961544037s) [1,5,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793701172s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.961502075s) [1,5,3] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793701172s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960978508s) [3,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.793212891s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.016392708s) [4,3,2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848754883s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.11( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.016392708s) [4,3,2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1185.848754883s@ mbc={}] 
state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.811491013s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.643920898s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.16( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.811398506s) [1,3,2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.643920898s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.17( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.960863113s) [3,1,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.793212891s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.958947182s) [5,0,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791992188s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.015252113s) [5,3,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848388672s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost 
ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.808784485s) [5,0,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641967773s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.12( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.015178680s) [5,3,1] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.848388672s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.14( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.958866119s) [5,0,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791992188s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.013576508s) [2,4,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848388672s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.a( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.813790321s) [0,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.643798828s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.17( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.808738708s) [5,0,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1189.641967773s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.957107544s) [5,3,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791992188s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.13( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.013529778s) [2,4,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.848388672s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.15( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.957038879s) [5,3,1] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791992188s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.810080528s) [3,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645263672s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.14( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.810009956s) [3,2,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645263672s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.008739471s) 
[4,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.844116211s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956622124s) [0,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.791992188s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.14( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.008739471s) [4,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1185.844116211s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.12( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956571579s) [0,5,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.791992188s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.810297012s) [5,0,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645751953s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.13( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.810237885s) [5,0,1] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 
1189.645751953s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.957045555s) [1,3,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792724609s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956451416s) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792114258s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.13( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956451416s) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown pruub 1187.792114258s@ mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956871033s) [1,2,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792724609s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956832886s) [1,2,0] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792724609s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 
localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.809107780s) [5,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.645019531s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.12( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.809051514s) [5,1,3] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.645019531s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.805438042s) [5,4,0] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641479492s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.16( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.012266159s) [1,2,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848266602s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.12( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.805400848s) [5,4,0] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641479492s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.16( empty 
local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.012229919s) [1,2,0] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.848266602s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.955913544s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792114258s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956340790s) [1,3,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792602539s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.808458328s) [1,2,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.644775391s@ mbc={}] start_peering_interval up [2,3,4] -> [1,2,0], acting [2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.10( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.955863953s) [3,2,4] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792114258s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 
pruub=12.956303596s) [1,3,5] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792602539s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[5.11( empty local-lis/les=42/43 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.808383942s) [1,2,0] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.644775391s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.011260033s) [1,5,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.847778320s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.17( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.011223793s) [1,5,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.847778320s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.804635048s) [5,3,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active pruub 1189.641357422s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4], acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.955525398s) [3,4,2] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792236328s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, 
role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[6.1c( empty local-lis/les=42/43 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=14.804577827s) [5,3,4] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1189.641357422s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.007039070s) [5,1,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.843750000s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.11( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.955476761s) [3,4,2] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792236328s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.18( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.006999969s) [5,1,3] r=-1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.843750000s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.955180168s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active pruub 1187.792114258s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[4.1e( empty local-lis/les=40/41 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46 
pruub=12.955126762s) [0,4,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792114258s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46 pruub=12.956981659s) [1,3,2] r=-1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1187.792724609s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.010449409s) [5,0,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active pruub 1185.848022461s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[2.15( empty local-lis/les=38/39 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46 pruub=11.010396957s) [5,0,4] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1185.848022461s@ mbc={}] state: transitioning to Stray Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.b( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.9( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.f( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.5( empty local-lis/les=0/0 n=0 
ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.3( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.7( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.1( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[33334]: osd.4 pg_epoch: 46 pg[7.d( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:13 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.2 scrub starts Nov 28 03:08:13 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.2 scrub ok Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.15( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [5,3,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [5,0,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [5,1,3] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning 
to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.1b( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [5,1,0] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [5,0,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.d( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [5,1,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [5,1,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [5,1,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [5,3,1] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [5,0,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.13( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [5,0,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.12( 
empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [5,1,3] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.17( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [3,1,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [5,1,3] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.4( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [3,2,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [0,1,5] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.2( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [3,5,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [0,5,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [3,2,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [0,1,5] 
r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.4( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [3,2,1] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.1e( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [0,1,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.16( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [0,1,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.19( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [0,1,2] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.3( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [0,1,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.1f( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,5,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.18( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [0,1,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.19( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [0,1,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to 
Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.1d( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.1d( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,5,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.c( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.6( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,1,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.4( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,1,5] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[2.11( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [4,3,2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 mlcod 0'0 active+degraded m=1 
mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.d( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.c( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[2.7( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [4,2,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.f( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,5,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[2.14( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [4,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[3.c( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[3.3( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,0,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.b( empty local-lis/les=0/0 n=0 
ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,1,2] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.e( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[3.a( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[3.5( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[2.3( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [4,3,5] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.18( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.1b( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.1a( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 
03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.13( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [3,2,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [2,1,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.f( v 36'39 lc 36'1 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[4.13( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [4,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.1( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.18( 
empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.f( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.2( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.1c( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.5( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.19( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [2,3,1] r=2 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated 
Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [2,1,0] r=1 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.c( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [2,0,1] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.f( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [2,0,1] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [2,1,3] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.7( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[4.5( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react 
AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[2.a( empty local-lis/les=0/0 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [2,3,1] r=2 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[4.1( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [2,1,0] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.5( v 36'39 lc 36'11 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.7( v 36'39 lc 36'18 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.1( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost 
ceph-osd[33334]: osd.4 pg_epoch: 47 pg[7.d( v 36'39 lc 36'13 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=46) [4,2,3] r=0 lpr=46 pi=[44,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=0/0 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [2,1,0] r=1 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[6.1( empty local-lis/les=0/0 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [2,1,3] r=1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 46 pg[5.8( empty local-lis/les=0/0 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [2,0,1] r=2 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.1c( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[2.2( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,0,2] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[4.a( empty local-lis/les=46/47 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,0,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.d( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,2,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 
0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[2.8( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[5.16( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.13( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,2] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[2.1a( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,5,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,2,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.14( 
empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,2,0] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[2.16( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,2,0] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[5.11( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,2,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.15( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,3,5] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.1f( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,5,3] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[33334]: osd.4 pg_epoch: 47 pg[5.10( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [4,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.10( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,5,3] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[2.17( empty local-lis/les=46/47 n=0 ec=38/20 lis/c=38/38 les/c/f=39/39/0 sis=46) [1,5,3] r=0 lpr=46 pi=[38,46)/1 crt=0'0 mlcod 0'0 active mbc={}] 
state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[3.16( empty local-lis/les=46/47 n=0 ec=40/22 lis/c=40/40 les/c/f=41/41/0 sis=46) [1,3,5] r=0 lpr=46 pi=[40,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[5.9( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,5,0] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[5.1b( empty local-lis/les=46/47 n=0 ec=42/26 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,0,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:14 localhost ceph-osd[32393]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=46/47 n=0 ec=42/32 lis/c=42/42 les/c/f=43/43/0 sis=46) [1,3,2] r=0 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state: react AllReplicasActivated Activating complete Nov 28 03:08:15 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.c scrub starts Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.800663948s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1196.204589844s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.2( v 36'39 (0'0,36'39] local-lis/les=44/45 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.800559998s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.204589844s@ mbc={}] state: transitioning to Stray Nov 28 
03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.804156303s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1196.208251953s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.804106712s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.208251953s@ mbc={}] state: transitioning to Stray Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.804538727s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1196.209350586s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.803477287s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1196.208496094s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=44/45 n=2 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.804100990s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.209350586s@ 
mbc={}] state: transitioning to Stray Nov 28 03:08:15 localhost ceph-osd[32393]: osd.1 pg_epoch: 48 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=48 pruub=14.803343773s) [3,5,1] r=2 lpr=48 pi=[44,48)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1196.208496094s@ mbc={}] state: transitioning to Stray Nov 28 03:08:15 localhost python3[56753]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:16 localhost python3[56796]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317295.4785428-92600-114468586821601/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=880d8421ed22fd6e089f5c7c842f51482074b0c0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:20 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts Nov 28 03:08:21 localhost python3[56858]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:21 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.4 scrub starts Nov 28 03:08:21 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.4 scrub ok Nov 28 03:08:21 localhost python3[56901]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317300.9094434-92600-79932590728993/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False 
checksum=3f1634d98b90f8c800fba4d3a33fb1546a043fff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:08:22 localhost podman[56916]: 2025-11-28 08:08:22.977933337 +0000 UTC m=+0.084439330 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64) Nov 28 03:08:23 localhost podman[56916]: 2025-11-28 08:08:23.155501304 +0000 UTC m=+0.262007337 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1761123044) Nov 28 03:08:23 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:08:23 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.779549599s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 active pruub 1199.847290039s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.779484749s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1199.847290039s@ mbc={}] state: transitioning to Stray Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.774587631s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 active pruub 1199.842651367s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.774202347s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 active pruub 1199.842407227s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.774164200s) [3,4,2] r=1 
lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1199.842407227s@ mbc={}] state: transitioning to Stray Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.3( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=50 pruub=14.774456978s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1199.842651367s@ mbc={}] state: transitioning to Stray Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.769097328s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 active pruub 1199.837524414s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:23 localhost ceph-osd[33334]: osd.4 pg_epoch: 50 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.769072533s) [3,4,2] r=1 lpr=50 pi=[46,50)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1199.837524414s@ mbc={}] state: transitioning to Stray Nov 28 03:08:25 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.10 scrub starts Nov 28 03:08:25 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.10 scrub ok Nov 28 03:08:25 localhost ceph-osd[32393]: osd.1 pg_epoch: 52 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=4 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.558876038s) [0,1,2] r=1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1204.208496094s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:25 localhost ceph-osd[32393]: osd.1 pg_epoch: 52 pg[7.4( v 36'39 (0'0,36'39] local-lis/les=44/45 n=4 ec=44/33 lis/c=44/44 
les/c/f=45/45/0 sis=52 pruub=12.558786392s) [0,1,2] r=1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.208496094s@ mbc={}] state: transitioning to Stray Nov 28 03:08:25 localhost ceph-osd[32393]: osd.1 pg_epoch: 52 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.559266090s) [0,1,2] r=1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1204.208618164s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:25 localhost ceph-osd[32393]: osd.1 pg_epoch: 52 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=12.558312416s) [0,1,2] r=1 lpr=52 pi=[44,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1204.208618164s@ mbc={}] state: transitioning to Stray Nov 28 03:08:26 localhost python3[56993]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:26 localhost python3[57038]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317305.8341537-93172-12382793399610/source _original_basename=tmpj2x10uya follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:27 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.16 scrub starts Nov 28 03:08:27 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.16 scrub ok Nov 28 03:08:27 localhost python3[57100]: 
ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:28 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.d scrub starts Nov 28 03:08:28 localhost python3[57143]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317307.5709486-93261-107695719738191/source _original_basename=tmpob9jdbub follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:29 localhost python3[57173]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None Nov 28 03:08:29 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.8 deep-scrub starts Nov 28 03:08:29 localhost python3[57191]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:08:31 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.2 scrub starts Nov 28 03:08:31 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.2 scrub ok Nov 28 03:08:31 localhost ansible-async_wrapper.py[57363]: Invoked with 289175210079 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317310.6999605-93349-104102807179937/AnsiballZ_command.py _ Nov 28 03:08:31 localhost ansible-async_wrapper.py[57366]: Starting module and watcher Nov 28 03:08:31 localhost ansible-async_wrapper.py[57366]: Start 
watching 57367 (3600) Nov 28 03:08:31 localhost ansible-async_wrapper.py[57367]: Start module (57367) Nov 28 03:08:31 localhost ansible-async_wrapper.py[57363]: Return async_wrapper task started. Nov 28 03:08:31 localhost python3[57387]: ansible-ansible.legacy.async_status Invoked with jid=289175210079.57363 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:08:32 localhost ceph-osd[33334]: osd.4 pg_epoch: 54 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=54 pruub=13.456579208s) [2,0,4] r=2 lpr=54 pi=[46,54)/1 crt=36'39 mlcod 0'0 active pruub 1207.847656250s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:32 localhost ceph-osd[33334]: osd.4 pg_epoch: 54 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=54 pruub=13.456820488s) [2,0,4] r=2 lpr=54 pi=[46,54)/1 crt=36'39 mlcod 0'0 active pruub 1207.847900391s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:32 localhost ceph-osd[33334]: osd.4 pg_epoch: 54 pg[7.5( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=54 pruub=13.456375122s) [2,0,4] r=2 lpr=54 pi=[46,54)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1207.847656250s@ mbc={}] state: transitioning to Stray Nov 28 03:08:32 localhost ceph-osd[33334]: osd.4 pg_epoch: 54 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=44/33 lis/c=46/46 les/c/f=47/49/0 sis=54 pruub=13.456663132s) [2,0,4] r=2 lpr=54 pi=[46,54)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1207.847900391s@ mbc={}] state: transitioning to Stray Nov 28 03:08:33 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.19 scrub starts 
Nov 28 03:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 4047 writes, 19K keys, 4047 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4047 writes, 325 syncs, 12.45 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 642 writes, 2559 keys, 642 commit groups, 1.0 writes per commit group, ingest: 1.32 MB, 0.00 MB/s#012Interval WAL: 642 writes, 119 syncs, 5.39 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 
0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) 
Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 
0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt Nov 28 03:08:34 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.1a scrub starts Nov 28 03:08:34 localhost ceph-osd[32393]: osd.1 pg_epoch: 56 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=13.463187218s) [0,4,5] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1214.438842773s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:34 localhost ceph-osd[32393]: osd.1 
pg_epoch: 56 pg[7.6( v 36'39 (0'0,36'39] local-lis/les=48/49 n=2 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=13.463090897s) [0,4,5] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1214.438842773s@ mbc={}] state: transitioning to Stray Nov 28 03:08:34 localhost ceph-osd[32393]: osd.1 pg_epoch: 56 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=13.463495255s) [0,4,5] r=-1 lpr=56 pi=[48,56)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1214.439086914s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:34 localhost ceph-osd[32393]: osd.1 pg_epoch: 56 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56 pruub=13.463070869s) [0,4,5] r=-1 lpr=56 pi=[48,56)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1214.439086914s@ mbc={}] state: transitioning to Stray Nov 28 03:08:35 localhost puppet-user[57386]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:08:35 localhost puppet-user[57386]: (file: /etc/puppet/hiera.yaml) Nov 28 03:08:35 localhost puppet-user[57386]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:08:35 localhost puppet-user[57386]: (file & line not available) Nov 28 03:08:35 localhost puppet-user[57386]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:08:35 localhost puppet-user[57386]: (file & line not available) Nov 28 03:08:35 localhost puppet-user[57386]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 28 03:08:35 localhost puppet-user[57386]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 28 03:08:35 localhost puppet-user[57386]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.15 seconds Nov 28 03:08:35 localhost puppet-user[57386]: Notice: Applied catalog in 0.04 seconds Nov 28 03:08:35 localhost puppet-user[57386]: Application: Nov 28 03:08:35 localhost puppet-user[57386]: Initial environment: production Nov 28 03:08:35 localhost puppet-user[57386]: Converged environment: production Nov 28 03:08:35 localhost puppet-user[57386]: Run mode: user Nov 28 03:08:35 localhost puppet-user[57386]: Changes: Nov 28 03:08:35 localhost puppet-user[57386]: Events: Nov 28 03:08:35 localhost puppet-user[57386]: Resources: Nov 28 03:08:35 localhost puppet-user[57386]: Total: 10 Nov 28 03:08:35 localhost puppet-user[57386]: Time: Nov 28 03:08:35 localhost puppet-user[57386]: Schedule: 0.00 Nov 28 03:08:35 localhost puppet-user[57386]: File: 0.00 Nov 28 03:08:35 localhost puppet-user[57386]: Exec: 0.01 Nov 28 03:08:35 localhost puppet-user[57386]: Augeas: 0.01 Nov 28 03:08:35 localhost puppet-user[57386]: Transaction evaluation: 0.03 Nov 28 03:08:35 localhost puppet-user[57386]: Catalog application: 0.04 Nov 28 03:08:35 localhost puppet-user[57386]: Config retrieval: 0.19 Nov 28 03:08:35 localhost puppet-user[57386]: Last run: 1764317315 Nov 28 03:08:35 localhost puppet-user[57386]: Filebucket: 0.00 Nov 28 03:08:35 localhost puppet-user[57386]: Total: 0.04 Nov 28 03:08:35 localhost puppet-user[57386]: Version: Nov 28 03:08:35 localhost puppet-user[57386]: Config: 1764317314 Nov 28 03:08:35 localhost puppet-user[57386]: Puppet: 7.10.0 Nov 28 03:08:35 localhost ansible-async_wrapper.py[57367]: Module complete (57367) Nov 28 03:08:35 localhost 
ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.a scrub starts Nov 28 03:08:35 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.a scrub ok Nov 28 03:08:36 localhost ceph-osd[33334]: osd.4 pg_epoch: 56 pg[7.e( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [0,4,5] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:36 localhost ceph-osd[33334]: osd.4 pg_epoch: 56 pg[7.6( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=56) [0,4,5] r=1 lpr=56 pi=[48,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:36 localhost ansible-async_wrapper.py[57366]: Done in kid B. Nov 28 03:08:37 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.5 scrub starts Nov 28 03:08:37 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.5 scrub ok Nov 28 03:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 4930 writes, 22K keys, 4930 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4930 writes, 382 syncs, 12.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1683 writes, 6347 keys, 1683 commit groups, 1.0 writes per commit group, ingest: 2.34 MB, 0.00 MB/s#012Interval WAL: 1683 writes, 243 syncs, 6.93 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 
5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for 
pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 
0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 m Nov 28 03:08:40 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts Nov 28 03:08:40 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.13 scrub starts Nov 28 03:08:40 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.13 scrub ok Nov 28 03:08:41 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.15 scrub starts Nov 28 03:08:41 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.15 scrub ok Nov 28 03:08:41 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.14 scrub starts Nov 28 03:08:41 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.14 scrub ok Nov 28 03:08:41 localhost python3[57639]: ansible-ansible.legacy.async_status Invoked with jid=289175210079.57363 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:08:42 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.3 scrub starts Nov 28 03:08:42 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.d scrub starts Nov 28 03:08:42 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.d scrub ok Nov 28 03:08:42 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.3 scrub ok Nov 28 03:08:42 localhost python3[57655]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None 
Nov 28 03:08:42 localhost python3[57671]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:08:43 localhost ceph-osd[32393]: osd.1 pg_epoch: 58 pg[7.7( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58) [1,5,3] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:43 localhost ceph-osd[32393]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58) [1,5,3] r=0 lpr=58 pi=[50,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:43 localhost ceph-osd[33334]: osd.4 pg_epoch: 58 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=13.545802116s) [1,5,3] r=-1 lpr=58 pi=[50,58)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1218.125732422s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:43 localhost ceph-osd[33334]: osd.4 pg_epoch: 58 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=13.545705795s) [1,5,3] r=-1 lpr=58 pi=[50,58)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1218.125732422s@ mbc={}] state: transitioning to Stray Nov 28 03:08:43 localhost ceph-osd[33334]: osd.4 pg_epoch: 58 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=13.541893005s) [1,5,3] r=-1 lpr=58 pi=[50,58)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1218.122192383s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:43 localhost 
ceph-osd[33334]: osd.4 pg_epoch: 58 pg[7.7( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58 pruub=13.541290283s) [1,5,3] r=-1 lpr=58 pi=[50,58)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1218.122192383s@ mbc={}] state: transitioning to Stray Nov 28 03:08:43 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.18 scrub starts Nov 28 03:08:43 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.1c scrub starts Nov 28 03:08:43 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.18 scrub ok Nov 28 03:08:43 localhost python3[57721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:43 localhost python3[57739]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp_7vl7qvz recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 03:08:44 localhost python3[57769]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:44 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.1b scrub starts Nov 28 03:08:44 localhost ceph-osd[32393]: osd.1 pg_epoch: 59 pg[7.f( v 36'39 lc 36'1 (0'0,36'39] local-lis/les=58/59 n=1 ec=44/33 
lis/c=50/50 les/c/f=51/51/0 sis=58) [1,5,3] r=0 lpr=58 pi=[50,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:44 localhost ceph-osd[32393]: osd.1 pg_epoch: 59 pg[7.7( v 36'39 lc 36'18 (0'0,36'39] local-lis/les=58/59 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=58) [1,5,3] r=0 lpr=58 pi=[50,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:44 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.1b scrub ok Nov 28 03:08:45 localhost ceph-osd[32393]: osd.1 pg_epoch: 60 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.056539536s) [3,4,5] r=-1 lpr=60 pi=[44,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1220.208984375s@ TIME_FOR_DEEP mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:45 localhost ceph-osd[32393]: osd.1 pg_epoch: 60 pg[7.8( v 36'39 (0'0,36'39] local-lis/les=44/45 n=1 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=60 pruub=9.056453705s) [3,4,5] r=-1 lpr=60 pi=[44,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1220.208984375s@ TIME_FOR_DEEP mbc={}] state: transitioning to Stray Nov 28 03:08:45 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.9 deep-scrub starts Nov 28 03:08:45 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.9 deep-scrub ok Nov 28 03:08:46 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.16 scrub starts Nov 28 03:08:46 localhost ceph-osd[33334]: osd.4 pg_epoch: 60 pg[7.8( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=44/44 les/c/f=45/45/0 sis=60) [3,4,5] r=1 lpr=60 pi=[44,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:46 localhost ceph-osd[32393]: 
log_channel(cluster) log [DBG] : 5.16 scrub ok Nov 28 03:08:46 localhost python3[57873]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 28 03:08:47 localhost python3[57892]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:47 localhost ceph-osd[33334]: osd.4 pg_epoch: 62 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.171545982s) [0,2,4] r=2 lpr=62 pi=[46,62)/1 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1223.838500977s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:47 localhost ceph-osd[33334]: osd.4 pg_epoch: 62 pg[7.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=44/33 lis/c=46/46 les/c/f=47/47/0 sis=62 pruub=15.171466827s) [0,2,4] r=2 lpr=62 pi=[46,62)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1223.838500977s@ mbc={}] state: transitioning to Stray Nov 28 03:08:47 localhost python3[57924]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in 
follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:08:48 localhost python3[57974]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:48 localhost python3[57992]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:49 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.11 deep-scrub starts Nov 28 03:08:49 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts Nov 28 03:08:49 localhost python3[58054]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:49 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 5.11 deep-scrub ok Nov 28 03:08:49 localhost ceph-osd[32393]: osd.1 pg_epoch: 64 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=15.202980042s) [2,0,4] r=-1 lpr=64 pi=[48,64)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1230.435668945s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:49 localhost ceph-osd[32393]: osd.1 pg_epoch: 64 pg[7.a( v 36'39 (0'0,36'39] local-lis/les=48/49 n=1 ec=44/33 
lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=15.202869415s) [2,0,4] r=-1 lpr=64 pi=[48,64)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1230.435668945s@ mbc={}] state: transitioning to Stray Nov 28 03:08:49 localhost python3[58072]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:50 localhost python3[58134]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:50 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 4.5 scrub starts Nov 28 03:08:50 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 4.5 scrub ok Nov 28 03:08:50 localhost ceph-osd[33334]: osd.4 pg_epoch: 64 pg[7.a( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=48/48 les/c/f=49/49/0 sis=64) [2,0,4] r=2 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:50 localhost python3[58152]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:50 localhost 
python3[58214]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:51 localhost python3[58232]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:51 localhost ceph-osd[33334]: osd.4 pg_epoch: 66 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=66 pruub=13.369278908s) [3,1,2] r=-1 lpr=66 pi=[50,66)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1226.125610352s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:51 localhost ceph-osd[33334]: osd.4 pg_epoch: 66 pg[7.b( v 36'39 (0'0,36'39] local-lis/les=50/51 n=1 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=66 pruub=13.369094849s) [3,1,2] r=-1 lpr=66 pi=[50,66)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1226.125610352s@ mbc={}] state: transitioning to Stray Nov 28 03:08:51 localhost python3[58262]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:08:51 localhost systemd[1]: Reloading. Nov 28 03:08:51 localhost systemd-rc-local-generator[58287]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 03:08:51 localhost systemd-sysv-generator[58291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:08:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:08:52 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.16 scrub starts Nov 28 03:08:52 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.16 scrub ok Nov 28 03:08:52 localhost ceph-osd[32393]: osd.1 pg_epoch: 66 pg[7.b( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=50/50 les/c/f=51/51/0 sis=66) [3,1,2] r=1 lpr=66 pi=[50,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray Nov 28 03:08:52 localhost python3[58348]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:52 localhost python3[58366]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:53 localhost python3[58428]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:08:53 localhost ceph-osd[32393]: osd.1 
pg_epoch: 68 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/33 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.951500893s) [1,3,2] r=0 lpr=68 pi=[52,68)/1 luod=0'0 crt=36'39 lcod 0'0 mlcod 0'0 active pruub 1233.277465820s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:53 localhost ceph-osd[32393]: osd.1 pg_epoch: 68 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=44/33 lis/c=52/52 les/c/f=53/53/0 sis=68 pruub=13.951500893s) [1,3,2] r=0 lpr=68 pi=[52,68)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown pruub 1233.277465820s@ mbc={}] state: transitioning to Primary Nov 28 03:08:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:08:53 localhost systemd[1]: tmp-crun.68LuOY.mount: Deactivated successfully. Nov 28 03:08:53 localhost podman[58446]: 2025-11-28 08:08:53.613557648 +0000 UTC m=+0.101052051 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd) Nov 28 03:08:53 localhost python3[58447]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:08:53 localhost podman[58446]: 2025-11-28 08:08:53.868594021 +0000 UTC m=+0.356088474 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, version=17.1.12) Nov 28 03:08:53 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:08:54 localhost python3[58506]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:08:54 localhost systemd[1]: Reloading. Nov 28 03:08:54 localhost systemd-rc-local-generator[58533]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:08:54 localhost systemd-sysv-generator[58537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 03:08:54 localhost ceph-osd[32393]: osd.1 pg_epoch: 69 pg[7.c( v 36'39 (0'0,36'39] local-lis/les=68/69 n=1 ec=44/33 lis/c=52/52 les/c/f=53/53/0 sis=68) [1,3,2] r=0 lpr=68 pi=[52,68)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:08:54 localhost systemd[1]: Starting Create netns directory... Nov 28 03:08:54 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 03:08:54 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 03:08:54 localhost systemd[1]: Finished Create netns directory. Nov 28 03:08:55 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.17 scrub starts Nov 28 03:08:55 localhost python3[58563]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 28 03:08:55 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.1b scrub starts Nov 28 03:08:55 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.17 scrub ok Nov 28 03:08:55 localhost ceph-osd[33334]: osd.4 pg_epoch: 70 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=54/55 n=2 ec=44/33 lis/c=54/54 les/c/f=55/55/0 sis=70 pruub=10.772554398s) [1,3,5] r=-1 lpr=70 pi=[54,70)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1227.644409180s@ mbc={}] start_peering_interval up [2,0,4] -> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015 Nov 28 03:08:55 localhost ceph-osd[33334]: osd.4 pg_epoch: 70 pg[7.d( v 36'39 (0'0,36'39] local-lis/les=54/55 n=2 ec=44/33 lis/c=54/54 les/c/f=55/55/0 sis=70 
pruub=10.772466660s) [1,3,5] r=-1 lpr=70 pi=[54,70)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1227.644409180s@ mbc={}] state: transitioning to Stray Nov 28 03:08:55 localhost ceph-osd[32393]: osd.1 pg_epoch: 70 pg[7.d( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=54/54 les/c/f=55/55/0 sis=70) [1,3,5] r=0 lpr=70 pi=[54,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Primary Nov 28 03:08:56 localhost ceph-osd[32393]: osd.1 pg_epoch: 71 pg[7.d( v 36'39 lc 36'13 (0'0,36'39] local-lis/les=70/71 n=2 ec=44/33 lis/c=54/54 les/c/f=55/55/0 sis=70) [1,3,5] r=0 lpr=70 pi=[54,70)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+3)=2}}] state: react AllReplicasActivated Activating complete Nov 28 03:08:57 localhost python3[58621]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 28 03:08:57 localhost podman[58687]: 2025-11-28 08:08:57.33739144 +0000 UTC m=+0.081405768 container create 2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, container_name=nova_compute_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step2, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true) Nov 28 03:08:57 localhost systemd[1]: Started libpod-conmon-2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b.scope. Nov 28 03:08:57 localhost systemd[1]: Started libcrun container. 
Nov 28 03:08:57 localhost podman[58687]: 2025-11-28 08:08:57.290793441 +0000 UTC m=+0.034807849 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:08:57 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1f scrub starts Nov 28 03:08:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ffb2916f839ff1700adaff5cb29e97302d49ba8ff980d3124f389d659473a3/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:08:57 localhost podman[58687]: 2025-11-28 08:08:57.404510308 +0000 UTC m=+0.148524656 container init 2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': 
['/var/log/containers/nova:/var/log/nova:z']}, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z) Nov 28 03:08:57 localhost podman[58710]: 2025-11-28 08:08:57.414477305 +0000 UTC m=+0.093908802 container create 8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 
'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:08:57 localhost systemd[1]: libpod-2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b.scope: Deactivated successfully. Nov 28 03:08:57 localhost systemd[1]: Started libpod-conmon-8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496.scope. Nov 28 03:08:57 localhost systemd[1]: Started libcrun container. Nov 28 03:08:57 localhost podman[58710]: 2025-11-28 08:08:57.364482031 +0000 UTC m=+0.043913558 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:08:57 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0818f18800ef844a6a48c8a7ece9ae523d5aa6f809095a5eb180d408e8d636c4/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Nov 28 03:08:57 localhost podman[58687]: 2025-11-28 08:08:57.465885271 +0000 UTC m=+0.209899619 container start 2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
io.buildah.version=1.41.4, container_name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:08:57 localhost python3[58621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova Nov 28 03:08:57 localhost podman[58710]: 2025-11-28 08:08:57.47072541 +0000 UTC m=+0.150156937 container init 
8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_virtqemud_init_logs, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, release=1761123044, config_id=tripleo_step2, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git) Nov 28 03:08:57 localhost podman[58710]: 2025-11-28 08:08:57.480076666 +0000 UTC m=+0.159508203 container start 
8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_id=tripleo_step2, container_name=nova_virtqemud_init_logs, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-nova-libvirt) Nov 28 03:08:57 localhost python3[58621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs 
--conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm Nov 28 03:08:57 localhost systemd[1]: libpod-8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496.scope: Deactivated successfully. 
Nov 28 03:08:57 localhost podman[58730]: 2025-11-28 08:08:57.510443088 +0000 UTC m=+0.072591978 container died 2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute_init_log, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, managed_by=tripleo_ansible) Nov 28 03:08:57 localhost podman[58730]: 2025-11-28 08:08:57.5401625 +0000 UTC m=+0.102311330 container cleanup 
2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step2, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, tcib_managed=true) Nov 28 03:08:57 localhost systemd[1]: libpod-conmon-2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b.scope: Deactivated successfully. 
Nov 28 03:08:57 localhost podman[58753]: 2025-11-28 08:08:57.591725251 +0000 UTC m=+0.097371088 container died 8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, config_id=tripleo_step2, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 28 03:08:57 localhost podman[58753]: 
2025-11-28 08:08:57.61647096 +0000 UTC m=+0.122116717 container cleanup 8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step2, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtqemud_init_logs, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:08:57 localhost systemd[1]: 
libpod-conmon-8cd7b1e7d0a7d27d2f93a539023775fca4a5591861445449044b4e057c54b496.scope: Deactivated successfully. Nov 28 03:08:57 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1f scrub ok Nov 28 03:08:57 localhost podman[58878]: 2025-11-28 08:08:57.980679102 +0000 UTC m=+0.071076862 container create 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, 
batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, container_name=create_virtlogd_wrapper, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt) Nov 28 03:08:58 localhost podman[58879]: 2025-11-28 08:08:58.002235912 +0000 UTC m=+0.086131702 container create b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=create_haproxy_wrapper, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_id=tripleo_step2, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:08:58 localhost systemd[1]: Started libpod-conmon-99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8.scope. Nov 28 03:08:58 localhost systemd[1]: Started libpod-conmon-b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18.scope. Nov 28 03:08:58 localhost systemd[1]: Started libcrun container. 
Nov 28 03:08:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fa74df54e8af55fadaf755a0b31aeec0d57923866d5b313d9489d8d758e3e8/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 28 03:08:58 localhost systemd[1]: Started libcrun container. Nov 28 03:08:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bb70caa6e9f6f4d170c3a5868421bfc24d38542c14f6a27a1edf3bcfdc45d32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 03:08:58 localhost podman[58878]: 2025-11-28 08:08:58.045286133 +0000 UTC m=+0.135683903 container init 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=create_virtlogd_wrapper, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step2, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}) Nov 28 03:08:58 localhost podman[58879]: 2025-11-28 08:08:58.048936155 +0000 UTC m=+0.132831925 container init b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=create_haproxy_wrapper, config_id=tripleo_step2, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:08:58 localhost podman[58879]: 2025-11-28 08:08:57.952028223 
+0000 UTC m=+0.035924003 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 28 03:08:58 localhost podman[58878]: 2025-11-28 08:08:57.952342673 +0000 UTC m=+0.042740453 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:08:58 localhost podman[58878]: 2025-11-28 08:08:58.054754264 +0000 UTC m=+0.145152054 container start 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, container_name=create_virtlogd_wrapper, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step2, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:08:58 localhost podman[58878]: 2025-11-28 08:08:58.05497898 +0000 UTC m=+0.145376750 container attach 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': 
'1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:08:58 localhost podman[58879]: 2025-11-28 08:08:58.05690819 +0000 UTC m=+0.140803940 container start b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, container_name=create_haproxy_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com) Nov 28 03:08:58 localhost podman[58879]: 2025-11-28 08:08:58.057111986 +0000 UTC m=+0.141007806 container attach b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_id=tripleo_step2, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper) Nov 28 03:08:58 localhost systemd[1]: var-lib-containers-storage-overlay-25ffb2916f839ff1700adaff5cb29e97302d49ba8ff980d3124f389d659473a3-merged.mount: Deactivated successfully. Nov 28 03:08:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b-userdata-shm.mount: Deactivated successfully. Nov 28 03:08:59 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.18 scrub starts Nov 28 03:08:59 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.18 scrub ok Nov 28 03:08:59 localhost ovs-vsctl[58986]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory) Nov 28 03:09:00 localhost systemd[1]: libpod-99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8.scope: Deactivated successfully. 
Nov 28 03:09:00 localhost systemd[1]: libpod-99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8.scope: Consumed 2.104s CPU time. Nov 28 03:09:00 localhost podman[59130]: 2025-11-28 08:09:00.222503626 +0000 UTC m=+0.039888295 container died 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, com.redhat.component=openstack-nova-libvirt-container, container_name=create_virtlogd_wrapper, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, batch=17.1_20251118.1) Nov 28 03:09:00 localhost systemd[1]: tmp-crun.8Bxigw.mount: Deactivated successfully. Nov 28 03:09:00 localhost systemd[1]: tmp-crun.arSVSd.mount: Deactivated successfully. Nov 28 03:09:00 localhost podman[59130]: 2025-11-28 08:09:00.260949685 +0000 UTC m=+0.078334334 container cleanup 99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-type=git, config_id=tripleo_step2, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=create_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:09:00 localhost systemd[1]: libpod-conmon-99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8.scope: Deactivated successfully. 
Nov 28 03:09:00 localhost python3[58621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro 
--volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Nov 28 03:09:00 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 28 03:09:00 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 28 03:09:00 localhost systemd[1]: libpod-b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18.scope: Deactivated successfully.
Nov 28 03:09:00 localhost systemd[1]: libpod-b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18.scope: Consumed 2.092s CPU time.
Nov 28 03:09:00 localhost podman[58879]: 2025-11-28 08:09:00.941265003 +0000 UTC m=+3.025160833 container died b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
architecture=x86_64, io.buildah.version=1.41.4, container_name=create_haproxy_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step2, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12) Nov 28 03:09:01 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.2 scrub starts Nov 28 03:09:01 localhost podman[59169]: 2025-11-28 08:09:01.043739426 +0000 UTC m=+0.090431225 container cleanup b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=create_haproxy_wrapper) Nov 28 03:09:01 localhost systemd[1]: libpod-conmon-b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18.scope: Deactivated successfully. 
Nov 28 03:09:01 localhost python3[58621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume 
/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Nov 28 03:09:01 localhost systemd[1]: var-lib-containers-storage-overlay-8bb70caa6e9f6f4d170c3a5868421bfc24d38542c14f6a27a1edf3bcfdc45d32-merged.mount: Deactivated successfully.
Nov 28 03:09:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7e80d2244e97078fdfa0b346326e9e74f6c05087f46da1752d168ebfe5e5e18-userdata-shm.mount: Deactivated successfully.
Nov 28 03:09:01 localhost systemd[1]: var-lib-containers-storage-overlay-16fa74df54e8af55fadaf755a0b31aeec0d57923866d5b313d9489d8d758e3e8-merged.mount: Deactivated successfully.
Nov 28 03:09:01 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99a1ca315b8742a552d837e443e8b0abb0296bf95f2a35ef9ad3c6a89867d1c8-userdata-shm.mount: Deactivated successfully.
Nov 28 03:09:01 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 28 03:09:01 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 28 03:09:01 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 28 03:09:01 localhost python3[59224]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:09:03 localhost python3[59345]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005538515 step=2 update_config_hash_only=False
Nov 28 03:09:03 localhost python3[59361]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:09:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 72 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/33 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=12.102886200s) [3,5,1] r=-1 lpr=72 pi=[56,72)/1 luod=0'0 crt=36'39 mlcod 0'0 active pruub 1237.636230469s@ mbc={}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:09:04 localhost ceph-osd[33334]: osd.4 pg_epoch: 72 pg[7.e( v 36'39 (0'0,36'39] local-lis/les=56/57 n=1 ec=44/33 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=12.102806091s) [3,5,1] r=-1 lpr=72 pi=[56,72)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1237.636230469s@ mbc={}] state: transitioning to Stray
Nov 28 03:09:04 localhost python3[59377]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Nov 28 03:09:04 localhost ceph-osd[32393]: osd.1 pg_epoch: 72 pg[7.e( empty local-lis/les=0/0 n=0 ec=44/33 lis/c=56/56 les/c/f=57/57/0 sis=72) [3,5,1] r=2 lpr=72 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state: transitioning to Stray
Nov 28 03:09:05 localhost ceph-osd[32393]: osd.1 pg_epoch: 74 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=44/33 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=10.883349419s) [0,5,1] r=2 lpr=74 pi=[58,74)/1 crt=36'39 mlcod 0'0 active pruub 1242.462890625s@ mbc={255={}}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Nov 28 03:09:05 localhost ceph-osd[32393]: osd.1 pg_epoch: 74 pg[7.f( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=44/33 lis/c=58/58 les/c/f=59/59/0 sis=74 pruub=10.883151054s) [0,5,1] r=2 lpr=74 pi=[58,74)/1 crt=36'39 mlcod 0'0 unknown NOTIFY pruub 1242.462890625s@ mbc={}] state: transitioning to Stray
Nov 28 03:09:08 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 28 03:09:08 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 28 03:09:08 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 28 03:09:08 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 28 03:09:11 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 28 03:09:11 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 28 03:09:14 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 28 03:09:14 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 28 03:09:15 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 28 03:09:15 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 28 03:09:15 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 28 03:09:15 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 28 03:09:16 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 28 03:09:19 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 28 03:09:19 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 28 03:09:20 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 28 03:09:20 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 28 03:09:20 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 28 03:09:20 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 28 03:09:21 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 28 03:09:21 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 28 03:09:23 localhost sshd[59378]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 03:09:23 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.1a deep-scrub starts
Nov 28 03:09:23 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.1a deep-scrub ok
Nov 28 03:09:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:09:24 localhost systemd[1]: tmp-crun.rKxtLS.mount: Deactivated successfully.
Nov 28 03:09:24 localhost podman[59380]: 2025-11-28 08:09:24.164275225 +0000 UTC m=+0.091620860 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:09:24 localhost podman[59380]: 2025-11-28 08:09:24.383674226 +0000 UTC m=+0.311019831 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:09:24 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:09:25 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.8 scrub starts Nov 28 03:09:25 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.8 scrub ok Nov 28 03:09:25 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.2 scrub starts Nov 28 03:09:25 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.2 scrub ok Nov 28 03:09:26 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.7 scrub starts Nov 28 03:09:26 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.7 scrub ok Nov 28 03:09:28 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts Nov 28 03:09:28 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok Nov 28 03:09:28 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.f scrub starts Nov 28 03:09:30 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts Nov 28 03:09:30 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok Nov 28 03:09:30 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1 scrub starts Nov 28 03:09:30 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1 scrub ok Nov 28 03:09:32 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.3 scrub starts Nov 28 03:09:32 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.3 scrub ok Nov 28 03:09:34 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.1c scrub starts Nov 28 03:09:34 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 3.1c scrub ok Nov 28 03:09:34 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.7 scrub starts Nov 28 03:09:34 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.7 scrub ok Nov 28 03:09:36 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.c scrub starts Nov 28 03:09:36 localhost ceph-osd[32393]: log_channel(cluster) log [DBG] : 7.c scrub ok Nov 28 03:09:39 localhost ceph-osd[33334]: 
log_channel(cluster) log [DBG] : 2.3 scrub starts Nov 28 03:09:39 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.3 scrub ok Nov 28 03:09:41 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.c scrub starts Nov 28 03:09:41 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.c scrub ok Nov 28 03:09:42 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.d scrub starts Nov 28 03:09:42 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.d scrub ok Nov 28 03:09:43 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.11 scrub starts Nov 28 03:09:43 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 2.11 scrub ok Nov 28 03:09:46 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.13 scrub starts Nov 28 03:09:46 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.13 scrub ok Nov 28 03:09:48 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1c scrub starts Nov 28 03:09:48 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.1c scrub ok Nov 28 03:09:50 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.e scrub starts Nov 28 03:09:50 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.e scrub ok Nov 28 03:09:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:09:54 localhost podman[59485]: 2025-11-28 08:09:54.982870772 +0000 UTC m=+0.090365963 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, version=17.1.12, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Nov 28 03:09:55 localhost podman[59485]: 2025-11-28 08:09:55.179629537 +0000 UTC m=+0.287124728 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z) Nov 28 03:09:55 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:09:58 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts Nov 28 03:09:58 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok Nov 28 03:10:02 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.1a scrub starts Nov 28 03:10:02 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 6.1a scrub ok Nov 28 03:10:03 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.10 scrub starts Nov 28 03:10:03 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.10 scrub ok Nov 28 03:10:12 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts Nov 28 03:10:12 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok Nov 28 03:10:13 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.f scrub starts Nov 28 03:10:13 localhost ceph-osd[33334]: log_channel(cluster) log [DBG] : 5.f scrub ok Nov 28 03:10:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:10:25 localhost podman[59515]: 2025-11-28 08:10:25.981979644 +0000 UTC m=+0.093638023 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, release=1761123044) Nov 28 03:10:26 localhost podman[59515]: 2025-11-28 08:10:26.229664912 +0000 UTC m=+0.341323241 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, vcs-type=git, distribution-scope=public) Nov 28 03:10:26 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:10:42 localhost systemd[1]: tmp-crun.1dyme5.mount: Deactivated successfully. 
Nov 28 03:10:42 localhost podman[59644]: 2025-11-28 08:10:42.218914964 +0000 UTC m=+0.110890892 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, architecture=x86_64, io.openshift.expose-services=, name=rhceph, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git) Nov 28 03:10:42 localhost podman[59644]: 2025-11-28 08:10:42.294658107 +0000 UTC m=+0.186634025 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, release=553, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux ) Nov 28 03:10:55 localhost sshd[59790]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:10:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:10:56 localhost systemd[1]: tmp-crun.yECbQz.mount: Deactivated successfully. Nov 28 03:10:56 localhost podman[59792]: 2025-11-28 08:10:56.866046083 +0000 UTC m=+0.124086354 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, container_name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:10:57 localhost podman[59792]: 2025-11-28 08:10:57.063378773 +0000 UTC m=+0.321419064 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, io.openshift.expose-services=, 
build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 28 03:10:57 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:11:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:11:27 localhost podman[59819]: 2025-11-28 08:11:27.973246488 +0000 UTC m=+0.083235769 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Nov 28 03:11:28 localhost podman[59819]: 2025-11-28 08:11:28.172418747 +0000 UTC m=+0.282408028 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:11:28 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:11:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:11:58 localhost systemd[1]: tmp-crun.nZdOYV.mount: Deactivated successfully.
Nov 28 03:11:58 localhost podman[59925]: 2025-11-28 08:11:58.996916966 +0000 UTC m=+0.095899633 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 28 03:11:59 localhost podman[59925]: 2025-11-28 08:11:59.200547217 +0000 UTC m=+0.299529884 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, release=1761123044, name=rhosp17/openstack-qdrouterd)
Nov 28 03:11:59 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:12:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:12:29 localhost podman[59955]: 2025-11-28 08:12:29.978429696 +0000 UTC m=+0.084568762 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr)
Nov 28 03:12:30 localhost podman[59955]: 2025-11-28 08:12:30.2028306 +0000 UTC m=+0.308969626 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, container_name=metrics_qdr, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team)
Nov 28 03:12:30 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:13:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:13:00 localhost podman[60062]: 2025-11-28 08:13:00.973213994 +0000 UTC m=+0.085386366 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 03:13:01 localhost podman[60062]: 2025-11-28 08:13:01.212593966 +0000 UTC m=+0.324766298 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd)
Nov 28 03:13:01 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:13:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:13:31 localhost podman[60092]: 2025-11-28 08:13:31.979716769 +0000 UTC m=+0.080384298 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64)
Nov 28 03:13:32 localhost podman[60092]: 2025-11-28 08:13:32.166833116 +0000 UTC m=+0.267500635 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd)
Nov 28 03:13:32 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully.
Nov 28 03:13:33 localhost python3[60167]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:34 localhost python3[60212]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317613.46885-99577-195677168809152/source _original_basename=tmpyndwrb1i follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:35 localhost python3[60242]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:13:36 localhost ansible-async_wrapper.py[60414]: Invoked with 939045046376 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317616.4082067-99742-204519230252259/AnsiballZ_command.py _
Nov 28 03:13:36 localhost ansible-async_wrapper.py[60417]: Starting module and watcher
Nov 28 03:13:36 localhost ansible-async_wrapper.py[60417]: Start watching 60418 (3600)
Nov 28 03:13:36 localhost ansible-async_wrapper.py[60418]: Start module (60418)
Nov 28 03:13:36 localhost ansible-async_wrapper.py[60414]: Return async_wrapper task started.
Nov 28 03:13:37 localhost python3[60438]: ansible-ansible.legacy.async_status Invoked with jid=939045046376.60414 mode=status _async_dir=/tmp/.ansible_async
Nov 28 03:13:41 localhost puppet-user[60433]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Nov 28 03:13:41 localhost puppet-user[60433]: (file: /etc/puppet/hiera.yaml)
Nov 28 03:13:41 localhost puppet-user[60433]: Warning: Undefined variable '::deploy_config_name';
Nov 28 03:13:41 localhost puppet-user[60433]: (file & line not available)
Nov 28 03:13:41 localhost puppet-user[60433]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Nov 28 03:13:41 localhost puppet-user[60433]: (file & line not available)
Nov 28 03:13:41 localhost puppet-user[60433]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Nov 28 03:13:41 localhost puppet-user[60433]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Nov 28 03:13:41 localhost puppet-user[60433]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.11 seconds
Nov 28 03:13:41 localhost puppet-user[60433]: Notice: Applied catalog in 0.03 seconds
Nov 28 03:13:41 localhost puppet-user[60433]: Application:
Nov 28 03:13:41 localhost puppet-user[60433]: Initial environment: production
Nov 28 03:13:41 localhost puppet-user[60433]: Converged environment: production
Nov 28 03:13:41 localhost puppet-user[60433]: Run mode: user
Nov 28 03:13:41 localhost puppet-user[60433]: Changes:
Nov 28 03:13:41 localhost puppet-user[60433]: Events:
Nov 28 03:13:41 localhost puppet-user[60433]: Resources:
Nov 28 03:13:41 localhost puppet-user[60433]: Total: 10
Nov 28 03:13:41 localhost puppet-user[60433]: Time:
Nov 28 03:13:41 localhost puppet-user[60433]: Schedule: 0.00
Nov 28 03:13:41 localhost puppet-user[60433]: File: 0.00
Nov 28 03:13:41 localhost puppet-user[60433]: Exec: 0.01
Nov 28 03:13:41 localhost puppet-user[60433]: Augeas: 0.01
Nov 28 03:13:41 localhost puppet-user[60433]: Transaction evaluation: 0.02
Nov 28 03:13:41 localhost puppet-user[60433]: Catalog application: 0.03
Nov 28 03:13:41 localhost puppet-user[60433]: Config retrieval: 0.14
Nov 28 03:13:41 localhost puppet-user[60433]: Last run: 1764317621
Nov 28 03:13:41 localhost puppet-user[60433]: Filebucket: 0.00
Nov 28 03:13:41 localhost puppet-user[60433]: Total: 0.04
Nov 28 03:13:41 localhost puppet-user[60433]: Version:
Nov 28 03:13:41 localhost puppet-user[60433]: Config: 1764317621
Nov 28 03:13:41 localhost puppet-user[60433]: Puppet: 7.10.0
Nov 28 03:13:41 localhost ansible-async_wrapper.py[60418]: Module complete (60418)
Nov 28 03:13:41 localhost ansible-async_wrapper.py[60417]: Done in kid B.
Nov 28 03:13:47 localhost python3[60596]: ansible-ansible.legacy.async_status Invoked with jid=939045046376.60414 mode=status _async_dir=/tmp/.ansible_async
Nov 28 03:13:48 localhost python3[60657]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 03:13:48 localhost python3[60674]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:13:49 localhost python3[60724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:49 localhost python3[60742]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmporglf9wi recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 03:13:50 localhost python3[60772]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:51 localhost python3[60875]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Nov 28 03:13:52 localhost python3[60894]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:53 localhost python3[60926]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:13:54 localhost python3[60976]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:54 localhost python3[60994]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:55 localhost python3[61056]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:55 localhost python3[61074]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:56 localhost python3[61136]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:56 localhost python3[61154]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:57 localhost python3[61216]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:57 localhost python3[61234]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:57 localhost python3[61264]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:13:57 localhost systemd[1]: Reloading.
Nov 28 03:13:57 localhost systemd-rc-local-generator[61287]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:13:57 localhost systemd-sysv-generator[61292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:13:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:13:58 localhost python3[61349]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:59 localhost python3[61367]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:13:59 localhost python3[61429]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 28 03:13:59 localhost python3[61447]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:00 localhost python3[61477]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:00 localhost systemd[1]: Reloading.
Nov 28 03:14:00 localhost systemd-rc-local-generator[61502]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:00 localhost systemd-sysv-generator[61508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:00 localhost systemd[1]: Starting Create netns directory...
Nov 28 03:14:00 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 03:14:00 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 03:14:00 localhost systemd[1]: Finished Create netns directory.
Nov 28 03:14:01 localhost python3[61537]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Nov 28 03:14:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.
Nov 28 03:14:02 localhost podman[61553]: 2025-11-28 08:14:02.433246278 +0000 UTC m=+0.073273833 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:14:02 localhost podman[61553]: 2025-11-28 08:14:02.641625618 +0000 UTC m=+0.281653113 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Nov 28 03:14:02 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:14:03 localhost python3[61621]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 28 03:14:03 localhost podman[61756]: 2025-11-28 08:14:03.948737581 +0000 UTC m=+0.063189213 container create 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.4, distribution-scope=public) Nov 28 03:14:03 localhost podman[61770]: 2025-11-28 08:14:03.983281215 +0000 UTC m=+0.088328829 container create 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Nov 28 03:14:03 localhost systemd[1]: Started libpod-conmon-2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711.scope. 
Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. Nov 28 03:14:04 localhost podman[61756]: 2025-11-28 08:14:03.914857678 +0000 UTC m=+0.029309330 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:04 localhost systemd[1]: Started libpod-conmon-7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f.scope. Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61756]: 2025-11-28 08:14:04.027398513 +0000 UTC m=+0.141850135 container init 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61756]: 2025-11-28 08:14:04.034652292 +0000 UTC m=+0.149103924 container start 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3) Nov 28 03:14:04 localhost podman[61770]: 2025-11-28 08:14:03.94047905 +0000 UTC m=+0.045526734 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:14:04 localhost python3[61621]: ansible-tripleo_container_manage 
PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:04 localhost podman[61786]: 2025-11-28 08:14:04.049546355 +0000 UTC m=+0.126331064 container create cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.component=openstack-collectd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd) Nov 28 03:14:04 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:04 localhost systemd[1]: Created slice User Slice of UID 0. Nov 28 03:14:04 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 28 03:14:04 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 28 03:14:04 localhost podman[61786]: 2025-11-28 08:14:04.011364415 +0000 UTC m=+0.088149144 image pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 28 03:14:04 localhost systemd[1]: Starting User Manager for UID 0... Nov 28 03:14:04 localhost systemd[1]: Started libpod-conmon-cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.scope. Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb78a9787fbfdee8df647dff935d3e6e34a25076546a1ccbc8a68d8c48f6925c/merged/scripts supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61822]: 2025-11-28 08:14:04.040051123 +0000 UTC m=+0.048555018 image pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 28 03:14:04 localhost podman[61821]: 2025-11-28 08:14:04.041262972 +0000 UTC m=+0.047587568 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb78a9787fbfdee8df647dff935d3e6e34a25076546a1ccbc8a68d8c48f6925c/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61770]: 2025-11-28 08:14:04.155272403 +0000 UTC m=+0.260320037 container init 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, container_name=nova_statedir_owner, managed_by=tripleo_ansible, architecture=x86_64, 
config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:14:04 localhost podman[61770]: 2025-11-28 08:14:04.194502815 +0000 UTC m=+0.299550449 container start 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', 
'__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_statedir_owner, batch=17.1_20251118.1) Nov 28 03:14:04 localhost podman[61770]: 2025-11-28 08:14:04.195127316 +0000 UTC m=+0.300174980 container attach 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 
'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=nova_statedir_owner, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:14:04 localhost systemd[1]: libpod-7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f.scope: Deactivated successfully. Nov 28 03:14:04 localhost systemd[61870]: Queued start job for default target Main User Target. Nov 28 03:14:04 localhost systemd[61870]: Created slice User Application Slice. Nov 28 03:14:04 localhost systemd[61870]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 28 03:14:04 localhost systemd[61870]: Started Daily Cleanup of User's Temporary Directories. Nov 28 03:14:04 localhost systemd[61870]: Reached target Paths. Nov 28 03:14:04 localhost systemd[61870]: Reached target Timers. 
Nov 28 03:14:04 localhost systemd[61870]: Starting D-Bus User Message Bus Socket... Nov 28 03:14:04 localhost systemd[61870]: Starting Create User's Volatile Files and Directories... Nov 28 03:14:04 localhost systemd[61870]: Finished Create User's Volatile Files and Directories. Nov 28 03:14:04 localhost systemd[61870]: Listening on D-Bus User Message Bus Socket. Nov 28 03:14:04 localhost systemd[61870]: Reached target Sockets. Nov 28 03:14:04 localhost systemd[61870]: Reached target Basic System. Nov 28 03:14:04 localhost systemd[61870]: Reached target Main User Target. Nov 28 03:14:04 localhost systemd[61870]: Startup finished in 128ms. Nov 28 03:14:04 localhost systemd[1]: Started User Manager for UID 0. Nov 28 03:14:04 localhost systemd[1]: Started Session c1 of User root. Nov 28 03:14:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:14:04 localhost podman[61786]: 2025-11-28 08:14:04.33703471 +0000 UTC m=+0.413819589 container init cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:14:04 localhost podman[61822]: 2025-11-28 08:14:04.339801038 +0000 UTC m=+0.348304893 container create 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64) Nov 28 03:14:04 localhost podman[61770]: 2025-11-28 08:14:04.348622767 +0000 UTC m=+0.453670481 container died 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, tcib_managed=true, config_id=tripleo_step3, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-nova-compute-container, 
vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_statedir_owner, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:14:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:14:04 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:04 localhost systemd[1]: Started Session c2 of User root. Nov 28 03:14:04 localhost systemd[1]: session-c1.scope: Deactivated successfully. Nov 28 03:14:04 localhost systemd[1]: Started libpod-conmon-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope. 
Nov 28 03:14:04 localhost podman[61786]: 2025-11-28 08:14:04.407239474 +0000 UTC m=+0.484024223 container start cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container) Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. Nov 28 03:14:04 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61822]: 2025-11-28 08:14:04.44311381 +0000 UTC m=+0.451617675 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com) Nov 28 03:14:04 localhost systemd[1]: session-c2.scope: Deactivated successfully. 
Nov 28 03:14:04 localhost podman[61895]: 2025-11-28 08:14:04.473344708 +0000 UTC m=+0.159927616 container cleanup 7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com, container_name=nova_statedir_owner, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:14:04 localhost systemd[1]: libpod-conmon-7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f.scope: Deactivated successfully. Nov 28 03:14:04 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py Nov 28 03:14:04 localhost podman[61821]: 2025-11-28 08:14:04.492864336 +0000 UTC m=+0.499188892 container create a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_init_log, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:14:04 localhost podman[61822]: 2025-11-28 08:14:04.506822339 +0000 UTC m=+0.515326204 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-type=git, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12) Nov 28 03:14:04 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=f62921da3a3d0eed1be38a46b3ed6ac3 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume 
/etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 Nov 28 03:14:04 localhost systemd[1]: Started libpod-conmon-a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1.scope. Nov 28 03:14:04 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/558adb40dc3f0c457c124ec6699b165daa74a355f52d98e7436d696b86369c63/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[61999]: 2025-11-28 08:14:04.592180322 +0000 UTC m=+0.048659232 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z, name=rhosp17/openstack-rsyslog, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog) Nov 28 03:14:04 localhost podman[61821]: 2025-11-28 08:14:04.614386765 +0000 UTC m=+0.620711331 container init a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_id=tripleo_step3, tcib_managed=true, container_name=ceilometer_init_log, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:14:04 localhost podman[61821]: 2025-11-28 08:14:04.622465661 +0000 UTC m=+0.628790227 container start a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, vcs-type=git, container_name=ceilometer_init_log, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, 
config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:14:04 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer Nov 28 03:14:04 localhost systemd[1]: libpod-a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1.scope: Deactivated successfully. 
Nov 28 03:14:04 localhost podman[61919]: 2025-11-28 08:14:04.634906256 +0000 UTC m=+0.252257592 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Nov 28 03:14:04 localhost podman[62059]: 2025-11-28 08:14:04.688536915 +0000 UTC m=+0.045149681 container died a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_init_log, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Nov 28 03:14:04 localhost podman[61999]: 2025-11-28 08:14:04.722555352 +0000 UTC m=+0.179034212 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.12, url=https://www.redhat.com, vcs-type=git, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:14:04 localhost systemd[1]: libpod-conmon-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. 
Nov 28 03:14:04 localhost podman[62107]: 2025-11-28 08:14:04.797744284 +0000 UTC m=+0.076202176 container create f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.) 
Nov 28 03:14:04 localhost podman[61919]: 2025-11-28 08:14:04.819270205 +0000 UTC m=+0.436621541 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true) Nov 28 03:14:04 localhost systemd[1]: Started libpod-conmon-f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b.scope. Nov 28 03:14:04 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:14:04 localhost podman[62107]: 2025-11-28 08:14:04.753380608 +0000 UTC m=+0.031838500 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:04 localhost podman[62059]: 2025-11-28 08:14:04.86711077 +0000 UTC m=+0.223723566 container cleanup a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step3, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_init_log, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi) Nov 28 03:14:04 localhost systemd[1]: Started libcrun container. Nov 28 03:14:04 localhost systemd[1]: libpod-conmon-a2af07c07192c0af15d6c45b3388461a125bcda5da66447a986ee6fedd37e9e1.scope: Deactivated successfully. Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d65302dd3a585cea223ca3e05b9a858698ed3b54cf3bbf51971fe5feba8f16c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d65302dd3a585cea223ca3e05b9a858698ed3b54cf3bbf51971fe5feba8f16c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d65302dd3a585cea223ca3e05b9a858698ed3b54cf3bbf51971fe5feba8f16c/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d65302dd3a585cea223ca3e05b9a858698ed3b54cf3bbf51971fe5feba8f16c/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:04 localhost podman[62107]: 2025-11-28 08:14:04.933508834 +0000 UTC m=+0.211966696 container init f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Nov 28 03:14:04 localhost podman[62107]: 2025-11-28 08:14:04.956478351 +0000 UTC m=+0.234936233 container start f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-libvirt, batch=17.1_20251118.1) Nov 28 03:14:04 localhost systemd[1]: var-lib-containers-storage-overlay-b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2-merged.mount: Deactivated successfully. Nov 28 03:14:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ab9dfcc9475e410a0c49d77a88cf9c95c3c1b4ce14ef438a666bad57bd45d0f-userdata-shm.mount: Deactivated successfully. Nov 28 03:14:05 localhost podman[62201]: 2025-11-28 08:14:05.331707267 +0000 UTC m=+0.092494771 container create c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12) Nov 28 03:14:05 localhost systemd[1]: Started libpod-conmon-c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808.scope. 
Nov 28 03:14:05 localhost podman[62201]: 2025-11-28 08:14:05.285542535 +0000 UTC m=+0.046330079 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:05 localhost systemd[1]: Started libcrun container. Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:05 localhost podman[62201]: 2025-11-28 08:14:05.42021747 +0000 UTC m=+0.181004974 
container init c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, container_name=nova_virtsecretd, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:14:05 localhost podman[62201]: 2025-11-28 08:14:05.432677085 +0000 UTC m=+0.193464559 container start c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., container_name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step3) Nov 28 03:14:05 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume 
/run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:05 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:05 localhost systemd[1]: Started Session c3 of User root. Nov 28 03:14:05 localhost systemd[1]: session-c3.scope: Deactivated successfully. Nov 28 03:14:05 localhost podman[62334]: 2025-11-28 08:14:05.958689246 +0000 UTC m=+0.092896623 container create 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtnodedevd, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1) Nov 28 03:14:06 localhost podman[62334]: 2025-11-28 08:14:05.910038475 +0000 UTC m=+0.044245892 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:06 localhost systemd[1]: Started libpod-conmon-490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436.scope. Nov 28 03:14:06 localhost systemd[1]: Started libcrun container. Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being 
remounted at /var/lib/containers/storage/overlay/95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost podman[62334]: 2025-11-28 08:14:06.056131653 +0000 UTC m=+0.190339010 container init 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': 
True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtnodedevd, config_id=tripleo_step3) Nov 28 03:14:06 localhost podman[62334]: 2025-11-28 08:14:06.065787368 +0000 UTC m=+0.199994725 container start 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, release=1761123044, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtnodedevd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com) Nov 28 03:14:06 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume 
/run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:06 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:06 localhost podman[62356]: 2025-11-28 08:14:06.108508622 +0000 UTC m=+0.189749822 container create 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true) Nov 28 03:14:06 localhost podman[62356]: 2025-11-28 08:14:06.013049447 +0000 UTC m=+0.094290687 image pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 28 03:14:06 localhost systemd[1]: Started Session c4 of User root. Nov 28 03:14:06 localhost systemd[1]: Started libpod-conmon-9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.scope. Nov 28 03:14:06 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e24aa22dcc3c3aeaf326f993725e399e8f3215a32c5fb5c28a2698bed898907/merged/etc/target supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e24aa22dcc3c3aeaf326f993725e399e8f3215a32c5fb5c28a2698bed898907/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:14:06 localhost podman[62356]: 2025-11-28 08:14:06.207495386 +0000 UTC m=+0.288736576 container init 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:14:06 localhost systemd[1]: session-c4.scope: Deactivated successfully. Nov 28 03:14:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:14:06 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:06 localhost systemd[1]: Started Session c5 of User root. 
Nov 28 03:14:06 localhost podman[62356]: 2025-11-28 08:14:06.305043026 +0000 UTC m=+0.386284216 container start 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:14:06 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=18a2751501986164e709168f53ab57c8 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 Nov 28 03:14:06 localhost podman[62431]: 2025-11-28 08:14:06.356057162 +0000 UTC m=+0.097665505 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid) Nov 28 03:14:06 localhost systemd[1]: session-c5.scope: Deactivated successfully. 
Nov 28 03:14:06 localhost kernel: Loading iSCSI transport class v2.0-870. Nov 28 03:14:06 localhost podman[62431]: 2025-11-28 08:14:06.441612071 +0000 UTC m=+0.183220374 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:14:06 localhost podman[62431]: unhealthy Nov 28 03:14:06 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:14:06 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Failed with result 'exit-code'. Nov 28 03:14:06 localhost podman[62523]: 2025-11-28 08:14:06.802304876 +0000 UTC m=+0.083958690 container create 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, 
container_name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:14:06 localhost systemd[1]: Started libpod-conmon-77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0.scope. Nov 28 03:14:06 localhost podman[62523]: 2025-11-28 08:14:06.753512291 +0000 UTC m=+0.035166165 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:06 localhost systemd[1]: Started libcrun container. Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:06 localhost podman[62523]: 2025-11-28 08:14:06.883199759 +0000 UTC m=+0.164853593 container init 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtstoraged, config_id=tripleo_step3, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt) Nov 28 03:14:06 localhost podman[62523]: 2025-11-28 08:14:06.893148464 +0000 UTC m=+0.174802298 container start 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, 
name=nova_virtstoraged, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt) Nov 28 03:14:06 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 
'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:06 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:06 localhost systemd[1]: Started Session c6 of User root. Nov 28 03:14:07 localhost systemd[1]: session-c6.scope: Deactivated successfully. 
Nov 28 03:14:07 localhost podman[62624]: 2025-11-28 08:14:07.398139639 +0000 UTC m=+0.085216171 container create 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, tcib_managed=true, container_name=nova_virtqemud, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 28 03:14:07 localhost systemd[1]: Started libpod-conmon-929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432.scope. Nov 28 03:14:07 localhost podman[62624]: 2025-11-28 08:14:07.358972069 +0000 UTC m=+0.046048601 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:07 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:07 localhost podman[62624]: 2025-11-28 08:14:07.506348677 +0000 UTC 
m=+0.193425229 container init 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 28 03:14:07 localhost podman[62624]: 2025-11-28 08:14:07.51593836 +0000 UTC m=+0.203014902 container start 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 
'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, container_name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:14:07 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume 
/sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:07 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:07 localhost systemd[1]: Started Session c7 of User root. Nov 28 03:14:07 localhost systemd[1]: session-c7.scope: Deactivated successfully. 
Nov 28 03:14:08 localhost podman[62728]: 2025-11-28 08:14:08.041324222 +0000 UTC m=+0.106889297 container create 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=nova_virtproxyd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Nov 28 03:14:08 localhost systemd[1]: Started libpod-conmon-7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657.scope. Nov 28 03:14:08 localhost podman[62728]: 2025-11-28 08:14:07.991746942 +0000 UTC m=+0.057312027 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:08 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:08 localhost podman[62728]: 2025-11-28 08:14:08.146310867 +0000 UTC m=+0.211875932 container init 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-type=git, maintainer=OpenStack TripleO Team, 
url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, build-date=2025-11-19T00:35:22Z, container_name=nova_virtproxyd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:14:08 localhost podman[62728]: 2025-11-28 08:14:08.156344025 +0000 UTC m=+0.221909090 container start 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_virtproxyd, vcs-type=git, distribution-scope=public, version=17.1.12, 
com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}) Nov 28 03:14:08 localhost python3[61621]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume 
/var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:14:08 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. Nov 28 03:14:08 localhost systemd[1]: Started Session c8 of User root. Nov 28 03:14:08 localhost systemd[1]: session-c8.scope: Deactivated successfully. Nov 28 03:14:08 localhost python3[62808]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:14:09 localhost python3[62824]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:14:09 localhost python3[62840]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None 
Nov 28 03:14:09 localhost python3[62856]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:09 localhost python3[62872]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:10 localhost python3[62888]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:10 localhost python3[62904]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:10 localhost python3[62920]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:10 localhost python3[62936]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:11 localhost python3[62952]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:11 localhost python3[62968]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:11 localhost python3[62984]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:11 localhost python3[63000]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:12 localhost python3[63016]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:12 localhost python3[63032]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:12 localhost python3[63048]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:12 localhost python3[63064]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:13 localhost python3[63080]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 28 03:14:13 localhost python3[63141]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:14 localhost python3[63170]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:14 localhost python3[63199]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:15 localhost python3[63228]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:15 localhost python3[63257]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:16 localhost python3[63286]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:16 localhost python3[63315]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:17 localhost python3[63344]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:17 localhost python3[63373]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317653.2087498-101042-145141859775249/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 03:14:18 localhost python3[63389]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 03:14:18 localhost systemd[1]: Reloading.
Nov 28 03:14:18 localhost systemd-rc-local-generator[63410]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:18 localhost systemd-sysv-generator[63414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:18 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 28 03:14:18 localhost systemd[61870]: Activating special unit Exit the Session...
Nov 28 03:14:18 localhost systemd[61870]: Stopped target Main User Target.
Nov 28 03:14:18 localhost systemd[61870]: Stopped target Basic System.
Nov 28 03:14:18 localhost systemd[61870]: Stopped target Paths.
Nov 28 03:14:18 localhost systemd[61870]: Stopped target Sockets.
Nov 28 03:14:18 localhost systemd[61870]: Stopped target Timers.
Nov 28 03:14:18 localhost systemd[61870]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 28 03:14:18 localhost systemd[61870]: Closed D-Bus User Message Bus Socket.
Nov 28 03:14:18 localhost systemd[61870]: Stopped Create User's Volatile Files and Directories.
Nov 28 03:14:18 localhost systemd[61870]: Removed slice User Application Slice.
Nov 28 03:14:18 localhost systemd[61870]: Reached target Shutdown.
Nov 28 03:14:18 localhost systemd[61870]: Finished Exit the Session.
Nov 28 03:14:18 localhost systemd[61870]: Reached target Exit the Session.
Nov 28 03:14:18 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 28 03:14:18 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 28 03:14:18 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 28 03:14:18 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 28 03:14:18 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 28 03:14:18 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 28 03:14:18 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 28 03:14:19 localhost python3[63442]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:19 localhost systemd[1]: Reloading.
Nov 28 03:14:19 localhost systemd-rc-local-generator[63462]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:19 localhost systemd-sysv-generator[63468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:19 localhost systemd[1]: Starting collectd container...
Nov 28 03:14:19 localhost systemd[1]: Started collectd container.
Nov 28 03:14:20 localhost python3[63511]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:20 localhost systemd[1]: Reloading.
Nov 28 03:14:20 localhost systemd-sysv-generator[63540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:20 localhost systemd-rc-local-generator[63537]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:20 localhost systemd[1]: Starting iscsid container...
Nov 28 03:14:20 localhost systemd[1]: Started iscsid container.
Nov 28 03:14:21 localhost python3[63578]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:21 localhost systemd[1]: Reloading.
Nov 28 03:14:21 localhost systemd-sysv-generator[63611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:21 localhost systemd-rc-local-generator[63607]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:21 localhost systemd[1]: Starting nova_virtlogd_wrapper container...
Nov 28 03:14:22 localhost systemd[1]: Started nova_virtlogd_wrapper container.
Nov 28 03:14:22 localhost python3[63646]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:22 localhost systemd[1]: Reloading.
Nov 28 03:14:22 localhost systemd-rc-local-generator[63671]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:22 localhost systemd-sysv-generator[63677]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:23 localhost systemd[1]: Starting nova_virtnodedevd container...
Nov 28 03:14:23 localhost tripleo-start-podman-container[63686]: Creating additional drop-in dependency for "nova_virtnodedevd" (490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436)
Nov 28 03:14:23 localhost systemd[1]: Reloading.
Nov 28 03:14:23 localhost systemd-sysv-generator[63746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:23 localhost systemd-rc-local-generator[63740]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:23 localhost systemd[1]: Started nova_virtnodedevd container.
Nov 28 03:14:24 localhost python3[63770]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:25 localhost systemd[1]: Reloading.
Nov 28 03:14:25 localhost systemd-rc-local-generator[63800]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:25 localhost systemd-sysv-generator[63804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:25 localhost systemd[1]: Starting nova_virtproxyd container...
Nov 28 03:14:25 localhost tripleo-start-podman-container[63811]: Creating additional drop-in dependency for "nova_virtproxyd" (7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657)
Nov 28 03:14:25 localhost systemd[1]: Reloading.
Nov 28 03:14:25 localhost systemd-sysv-generator[63873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:25 localhost systemd-rc-local-generator[63867]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:26 localhost systemd[1]: Started nova_virtproxyd container.
Nov 28 03:14:26 localhost python3[63894]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:26 localhost systemd[1]: Reloading.
Nov 28 03:14:26 localhost systemd-sysv-generator[63923]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:26 localhost systemd-rc-local-generator[63920]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:27 localhost systemd[1]: Starting nova_virtqemud container...
Nov 28 03:14:27 localhost tripleo-start-podman-container[63934]: Creating additional drop-in dependency for "nova_virtqemud" (929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432)
Nov 28 03:14:27 localhost systemd[1]: Reloading.
Nov 28 03:14:27 localhost systemd-sysv-generator[63995]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:27 localhost systemd-rc-local-generator[63992]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:27 localhost systemd[1]: Started nova_virtqemud container.
Nov 28 03:14:28 localhost python3[64019]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:28 localhost systemd[1]: Reloading.
Nov 28 03:14:28 localhost systemd-rc-local-generator[64044]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:28 localhost systemd-sysv-generator[64049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:28 localhost systemd[1]: Starting nova_virtsecretd container...
Nov 28 03:14:28 localhost tripleo-start-podman-container[64059]: Creating additional drop-in dependency for "nova_virtsecretd" (c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808)
Nov 28 03:14:28 localhost systemd[1]: Reloading.
Nov 28 03:14:28 localhost systemd-rc-local-generator[64113]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:28 localhost systemd-sysv-generator[64118]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:29 localhost systemd[1]: Started nova_virtsecretd container.
Nov 28 03:14:29 localhost python3[64140]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:29 localhost systemd[1]: Reloading.
Nov 28 03:14:29 localhost systemd-sysv-generator[64168]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:29 localhost systemd-rc-local-generator[64164]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:30 localhost systemd[1]: Starting nova_virtstoraged container...
Nov 28 03:14:30 localhost tripleo-start-podman-container[64180]: Creating additional drop-in dependency for "nova_virtstoraged" (77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0)
Nov 28 03:14:30 localhost systemd[1]: Reloading.
Nov 28 03:14:30 localhost systemd-rc-local-generator[64237]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:30 localhost systemd-sysv-generator[64242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:30 localhost systemd[1]: Started nova_virtstoraged container.
Nov 28 03:14:31 localhost python3[64263]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 03:14:31 localhost systemd[1]: Reloading.
Nov 28 03:14:31 localhost systemd-sysv-generator[64292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 03:14:31 localhost systemd-rc-local-generator[64289]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 03:14:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 03:14:31 localhost systemd[1]: Starting rsyslog container...
Nov 28 03:14:31 localhost systemd[1]: Started libcrun container.
Nov 28 03:14:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 28 03:14:31 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 28 03:14:31 localhost podman[64303]: 2025-11-28 08:14:31.746609169 +0000 UTC m=+0.139035864 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, io.openshift.expose-services=, release=1761123044, vcs-type=git, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Nov 28 03:14:31 localhost podman[64303]: 2025-11-28 08:14:31.757208915 +0000 UTC m=+0.149635600 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, config_id=tripleo_step3, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 28 03:14:31 localhost podman[64303]: rsyslog
Nov 28 03:14:31 localhost systemd[1]: Started rsyslog container.
Nov 28 03:14:31 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully.
Nov 28 03:14:31 localhost podman[64328]: 2025-11-28 08:14:31.871143224 +0000 UTC m=+0.032685666 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1)
Nov 28 03:14:31 localhost podman[64328]: 2025-11-28 08:14:31.891240101 +0000 UTC m=+0.052782543 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, build-date=2025-11-18T22:49:49Z, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, distribution-scope=public, container_name=rsyslog, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git)
Nov 28 03:14:31 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 03:14:31 localhost podman[64351]: 2025-11-28 08:14:31.956610451 +0000 UTC m=+0.038987665 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z)
Nov 28 03:14:31 localhost podman[64351]: rsyslog
Nov 28 03:14:31 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Nov 28 03:14:32 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Nov 28 03:14:32 localhost systemd[1]: Stopped rsyslog container.
Nov 28 03:14:32 localhost systemd[1]: Starting rsyslog container...
Nov 28 03:14:32 localhost systemd[1]: Started libcrun container.
Nov 28 03:14:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 28 03:14:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Nov 28 03:14:32 localhost podman[64379]: 2025-11-28 08:14:32.211797825 +0000 UTC m=+0.120061034 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12) Nov 28 03:14:32 localhost podman[64379]: 2025-11-28 08:14:32.21889716 +0000 UTC m=+0.127160359 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 rsyslog, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 
rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container) Nov 28 03:14:32 localhost podman[64379]: rsyslog Nov 28 03:14:32 localhost systemd[1]: Started rsyslog container. Nov 28 03:14:32 localhost python3[64380]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:14:32 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. 
Nov 28 03:14:32 localhost podman[64402]: 2025-11-28 08:14:32.386915201 +0000 UTC m=+0.053021200 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, container_name=rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 
17.1 rsyslog, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, com.redhat.component=openstack-rsyslog-container) Nov 28 03:14:32 localhost podman[64402]: 2025-11-28 08:14:32.424492131 +0000 UTC m=+0.090598090 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, container_name=rsyslog, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:14:32 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:14:32 localhost podman[64415]: 2025-11-28 08:14:32.511061584 +0000 UTC m=+0.054192458 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, release=1761123044, tcib_managed=true, build-date=2025-11-18T22:49:49Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog) Nov 28 03:14:32 localhost podman[64415]: rsyslog Nov 28 03:14:32 localhost systemd[1]: 
tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 28 03:14:32 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2. Nov 28 03:14:32 localhost systemd[1]: Stopped rsyslog container. Nov 28 03:14:32 localhost systemd[1]: Starting rsyslog container... Nov 28 03:14:32 localhost systemd[1]: var-lib-containers-storage-overlay-78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38-merged.mount: Deactivated successfully. Nov 28 03:14:32 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec-userdata-shm.mount: Deactivated successfully. Nov 28 03:14:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:14:32 localhost systemd[1]: Started libcrun container. Nov 28 03:14:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:32 localhost podman[64484]: 2025-11-28 08:14:32.768624802 +0000 UTC m=+0.073907362 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:14:32 localhost 
podman[64460]: 2025-11-28 08:14:32.790455404 +0000 UTC m=+0.150852630 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Nov 28 03:14:32 localhost podman[64460]: 2025-11-28 08:14:32.799980245 +0000 UTC m=+0.160377411 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat 
OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=rsyslog, build-date=2025-11-18T22:49:49Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:14:32 localhost podman[64460]: rsyslog Nov 28 03:14:32 localhost systemd[1]: Started rsyslog container. Nov 28 03:14:32 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. 
Nov 28 03:14:32 localhost podman[64525]: 2025-11-28 08:14:32.937990177 +0000 UTC m=+0.037036714 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}) Nov 28 03:14:32 localhost podman[64525]: 2025-11-28 08:14:32.959709074 +0000 UTC m=+0.058755601 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, url=https://www.redhat.com, container_name=rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:14:32 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:14:32 localhost podman[64484]: 2025-11-28 08:14:32.979908225 +0000 UTC m=+0.285190835 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd) Nov 28 03:14:32 localhost systemd[1]: 
9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:14:33 localhost podman[64552]: 2025-11-28 08:14:33.040643318 +0000 UTC m=+0.053478725 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, container_name=rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', 
'/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, config_id=tripleo_step3) Nov 28 03:14:33 localhost podman[64552]: rsyslog Nov 28 03:14:33 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 28 03:14:33 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3. Nov 28 03:14:33 localhost systemd[1]: Stopped rsyslog container. Nov 28 03:14:33 localhost systemd[1]: Starting rsyslog container... Nov 28 03:14:33 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:33 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:33 localhost podman[64607]: 2025-11-28 08:14:33.514920271 +0000 UTC m=+0.122536902 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, release=1761123044, architecture=x86_64, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 rsyslog) Nov 28 03:14:33 localhost podman[64607]: 2025-11-28 08:14:33.523602346 +0000 UTC m=+0.131218977 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, batch=17.1_20251118.1, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:14:33 localhost podman[64607]: rsyslog Nov 28 03:14:33 localhost systemd[1]: Started rsyslog container. Nov 28 03:14:33 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. 
Nov 28 03:14:33 localhost python3[64639]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005538515 step=3 update_config_hash_only=False Nov 28 03:14:33 localhost podman[64646]: 2025-11-28 08:14:33.686370822 +0000 UTC m=+0.052429792 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, container_name=rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:14:33 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec-userdata-shm.mount: Deactivated successfully. Nov 28 03:14:33 localhost systemd[1]: var-lib-containers-storage-overlay-78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38-merged.mount: Deactivated successfully. 
Nov 28 03:14:33 localhost podman[64646]: 2025-11-28 08:14:33.710574269 +0000 UTC m=+0.076633209 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:14:33 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:14:33 localhost podman[64660]: 2025-11-28 08:14:33.766215691 +0000 UTC m=+0.036611141 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog) Nov 28 03:14:33 localhost podman[64660]: rsyslog Nov 28 03:14:33 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4. Nov 28 03:14:34 localhost systemd[1]: Stopped rsyslog container. Nov 28 03:14:34 localhost systemd[1]: Starting rsyslog container... Nov 28 03:14:34 localhost systemd[1]: Started libcrun container. 
Nov 28 03:14:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff) Nov 28 03:14:34 localhost podman[64671]: 2025-11-28 08:14:34.260019152 +0000 UTC m=+0.119239868 container init 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, version=17.1.12, container_name=rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1761123044) Nov 28 03:14:34 localhost podman[64671]: 2025-11-28 08:14:34.268448839 +0000 UTC m=+0.127669565 container start 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, container_name=rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog) Nov 28 03:14:34 localhost podman[64671]: rsyslog Nov 28 03:14:34 localhost systemd[1]: Started rsyslog container. Nov 28 03:14:34 localhost systemd[1]: libpod-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec.scope: Deactivated successfully. 
Nov 28 03:14:34 localhost podman[64694]: 2025-11-28 08:14:34.430005776 +0000 UTC m=+0.050065697 container died 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, build-date=2025-11-18T22:49:49Z, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, io.buildah.version=1.41.4) Nov 28 03:14:34 localhost podman[64694]: 2025-11-28 08:14:34.455121522 +0000 UTC m=+0.075181423 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, container_name=rsyslog, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z) Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:14:34 localhost podman[64721]: 2025-11-28 08:14:34.538776522 +0000 UTC m=+0.053794235 container cleanup 45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, container_name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'f62921da3a3d0eed1be38a46b3ed6ac3'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 rsyslog) Nov 28 03:14:34 localhost podman[64721]: rsyslog Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: 
Failed with result 'exit-code'. Nov 28 03:14:34 localhost python3[64722]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:14:34 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45cbc87a49d83485e7046909600fab5f788908e8e4b6bf89504ff818c1bf66ec-userdata-shm.mount: Deactivated successfully. Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5. Nov 28 03:14:34 localhost systemd[1]: Stopped rsyslog container. Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly. Nov 28 03:14:34 localhost systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'. Nov 28 03:14:34 localhost systemd[1]: Failed to start rsyslog container. Nov 28 03:14:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:14:34 localhost python3[64747]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 28 03:14:34 localhost podman[64748]: 2025-11-28 08:14:34.972704706 +0000 UTC m=+0.078043193 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team) Nov 28 03:14:34 localhost podman[64748]: 2025-11-28 08:14:34.983394765 +0000 UTC m=+0.088733232 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack 
TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, tcib_managed=true, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:14:34 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:14:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:14:36 localhost podman[64768]: 2025-11-28 08:14:36.964871689 +0000 UTC m=+0.077956860 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, io.openshift.expose-services=, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1) Nov 28 03:14:36 localhost podman[64768]: 2025-11-28 08:14:36.979649167 +0000 UTC m=+0.092734318 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Nov 28 03:14:36 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:14:57 localhost sshd[64864]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:15:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:15:03 localhost systemd[1]: tmp-crun.HYw7Xj.mount: Deactivated successfully. 
Nov 28 03:15:03 localhost podman[64866]: 2025-11-28 08:15:03.986011595 +0000 UTC m=+0.096061760 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1) Nov 28 03:15:04 localhost podman[64866]: 2025-11-28 08:15:04.191807063 +0000 UTC m=+0.301857208 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12) Nov 28 03:15:04 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:15:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:15:05 localhost podman[64895]: 2025-11-28 08:15:05.959371999 +0000 UTC m=+0.073341079 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3) Nov 28 03:15:05 localhost podman[64895]: 2025-11-28 08:15:05.971444102 +0000 UTC m=+0.085413182 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=) Nov 28 03:15:05 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:15:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:15:07 localhost podman[64915]: 2025-11-28 08:15:07.973363226 +0000 UTC m=+0.081061058 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z) Nov 28 03:15:07 localhost podman[64915]: 2025-11-28 08:15:07.982474766 +0000 UTC m=+0.090172588 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Nov 28 03:15:07 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:15:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:15:34 localhost podman[64935]: 2025-11-28 08:15:34.970577134 +0000 UTC m=+0.082154841 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64) Nov 28 03:15:35 localhost podman[64935]: 2025-11-28 08:15:35.19257221 +0000 UTC m=+0.304149907 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public) Nov 28 03:15:35 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:15:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:15:36 localhost podman[64964]: 2025-11-28 08:15:36.974875972 +0000 UTC m=+0.086156674 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 28 03:15:37 localhost podman[64964]: 2025-11-28 08:15:37.007892378 +0000 UTC m=+0.119173030 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git) Nov 28 03:15:37 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:15:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:15:38 localhost podman[64984]: 2025-11-28 08:15:38.967845881 +0000 UTC m=+0.077547870 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack 
TripleO Team, container_name=iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true) Nov 28 03:15:39 localhost podman[64984]: 2025-11-28 08:15:39.002457336 +0000 UTC m=+0.112159275 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Nov 28 03:15:39 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:16:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:16:05 localhost podman[65080]: 2025-11-28 08:16:05.98314379 +0000 UTC m=+0.086043991 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true) Nov 28 03:16:06 localhost podman[65080]: 2025-11-28 08:16:06.187520365 +0000 UTC m=+0.290420576 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:16:06 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:16:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:16:07 localhost podman[65109]: 2025-11-28 08:16:07.978014408 +0000 UTC m=+0.081712118 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd) Nov 28 03:16:08 localhost podman[65109]: 2025-11-28 08:16:08.019543947 +0000 UTC m=+0.123241697 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:16:08 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:16:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:16:09 localhost podman[65128]: 2025-11-28 08:16:09.964821357 +0000 UTC m=+0.076388404 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, name=rhosp17/openstack-iscsid, tcib_managed=true, managed_by=tripleo_ansible) Nov 28 03:16:10 localhost podman[65128]: 2025-11-28 08:16:10.003453127 +0000 UTC m=+0.115020144 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3) Nov 28 03:16:10 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:16:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:16:36 localhost podman[65149]: 2025-11-28 08:16:36.958317343 +0000 UTC m=+0.064937251 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true) Nov 28 03:16:37 localhost podman[65149]: 2025-11-28 08:16:37.148630665 +0000 UTC m=+0.255250553 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 28 03:16:37 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:16:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:16:38 localhost systemd[1]: tmp-crun.j8kpdO.mount: Deactivated successfully. 
Nov 28 03:16:38 localhost podman[65178]: 2025-11-28 08:16:38.989821119 +0000 UTC m=+0.100033742 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12) Nov 28 03:16:39 localhost podman[65178]: 2025-11-28 08:16:39.027499779 +0000 UTC m=+0.137712362 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:16:39 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:16:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:16:40 localhost systemd[1]: tmp-crun.BLZT6R.mount: Deactivated successfully. 
Nov 28 03:16:40 localhost podman[65198]: 2025-11-28 08:16:40.977855365 +0000 UTC m=+0.085245286 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., 
name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64) Nov 28 03:16:41 localhost podman[65198]: 2025-11-28 08:16:41.015482995 +0000 UTC m=+0.122872976 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible) Nov 28 03:16:41 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:17:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:17:07 localhost systemd[1]: tmp-crun.VpkTnJ.mount: Deactivated successfully. 
Nov 28 03:17:07 localhost podman[65295]: 2025-11-28 08:17:07.986381786 +0000 UTC m=+0.091478838 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
version=17.1.12, name=rhosp17/openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64) Nov 28 03:17:08 localhost podman[65295]: 2025-11-28 08:17:08.208507047 +0000 UTC m=+0.313604069 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:17:08 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:17:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:17:09 localhost podman[65324]: 2025-11-28 08:17:09.975982081 +0000 UTC m=+0.085183645 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:17:09 localhost podman[65324]: 2025-11-28 08:17:09.991494398 +0000 UTC m=+0.100695992 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:17:10 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:17:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:17:11 localhost systemd[1]: tmp-crun.qeXvAz.mount: Deactivated successfully. Nov 28 03:17:11 localhost podman[65344]: 2025-11-28 08:17:11.994429144 +0000 UTC m=+0.091274302 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044) Nov 28 03:17:12 localhost podman[65344]: 2025-11-28 08:17:12.007349152 +0000 UTC m=+0.104194320 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Nov 28 03:17:12 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:17:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:17:38 localhost podman[65363]: 2025-11-28 08:17:38.980778042 +0000 UTC m=+0.085233105 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:17:39 localhost podman[65363]: 2025-11-28 08:17:39.208715233 +0000 UTC m=+0.313170266 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:17:39 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:17:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:17:40 localhost podman[65392]: 2025-11-28 08:17:40.979950712 +0000 UTC m=+0.086031710 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, release=1761123044, vcs-type=git, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, container_name=collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Nov 28 03:17:41 localhost podman[65392]: 2025-11-28 08:17:41.01849405 +0000 UTC m=+0.124575038 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Nov 28 03:17:41 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:17:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:17:42 localhost systemd[1]: tmp-crun.7xvp1M.mount: Deactivated successfully. Nov 28 03:17:42 localhost podman[65414]: 2025-11-28 08:17:42.978841464 +0000 UTC m=+0.087864648 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044) Nov 28 03:17:43 localhost podman[65414]: 2025-11-28 08:17:43.017620037 +0000 UTC m=+0.126643211 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, version=17.1.12, release=1761123044, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team) Nov 28 03:17:43 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:18:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:18:09 localhost podman[65558]: 2025-11-28 08:18:09.973628451 +0000 UTC m=+0.081899373 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:18:10 localhost podman[65558]: 2025-11-28 08:18:10.18954236 +0000 UTC m=+0.297813232 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:18:10 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:18:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:18:11 localhost podman[65588]: 2025-11-28 08:18:11.9854435 +0000 UTC m=+0.089480657 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Nov 28 03:18:12 localhost podman[65588]: 2025-11-28 08:18:12.025454001 +0000 UTC m=+0.129491148 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd) Nov 28 03:18:12 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:18:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:18:13 localhost podman[65608]: 2025-11-28 08:18:13.970062701 +0000 UTC m=+0.080542051 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z) Nov 28 03:18:13 localhost podman[65608]: 2025-11-28 08:18:13.98527649 +0000 UTC m=+0.095755860 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:18:13 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:18:28 localhost python3[65674]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:18:28 localhost python3[65719]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317908.1914458-108171-18257053407017/source _original_basename=tmpbdnwfiej follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:18:30 localhost python3[65781]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:18:30 localhost python3[65824]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317909.825541-108369-96830662867845/source _original_basename=tmpuzj_x7mm follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:18:31 localhost python3[65886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:18:31 localhost python3[65929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 
src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317910.7844563-108426-40749297093871/source _original_basename=tmpeypq69hq follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:18:32 localhost python3[65991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:18:32 localhost python3[66034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317911.738824-108485-115760241104859/source _original_basename=tmplnaudq30 follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:18:32 localhost python3[66064]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 28 03:18:32 localhost systemd[1]: Reloading. Nov 28 03:18:33 localhost systemd-sysv-generator[66088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:33 localhost systemd-rc-local-generator[66081]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 03:18:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:33 localhost systemd[1]: Reloading. Nov 28 03:18:33 localhost systemd-rc-local-generator[66125]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:33 localhost systemd-sysv-generator[66128]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:18:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 4386 writes, 20K keys, 4386 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4386 writes, 493 syncs, 8.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 339 writes, 797 keys, 339 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s#012Interval WAL: 339 writes, 168 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:18:34 localhost python3[66152]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:18:34 localhost systemd[1]: Reloading. 
Nov 28 03:18:34 localhost systemd-sysv-generator[66177]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:34 localhost systemd-rc-local-generator[66173]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:34 localhost systemd[1]: Reloading. Nov 28 03:18:34 localhost systemd-sysv-generator[66220]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:34 localhost systemd-rc-local-generator[66215]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:34 localhost systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m. Nov 28 03:18:35 localhost python3[66243]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 03:18:35 localhost systemd[1]: Reloading. Nov 28 03:18:35 localhost systemd-rc-local-generator[66268]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:35 localhost systemd-sysv-generator[66273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:35 localhost python3[66327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:18:36 localhost python3[66370]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317915.5509162-108604-164968456273423/source _original_basename=tmpwh3cy1n5 follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:18:36 localhost python3[66400]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:18:36 localhost systemd[1]: Reloading. Nov 28 03:18:36 localhost systemd-rc-local-generator[66425]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:36 localhost systemd-sysv-generator[66432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 03:18:37 localhost systemd[1]: Reached target tripleo_nova_libvirt.target. Nov 28 03:18:37 localhost python3[66456]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:18:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:18:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 5246 writes, 23K keys, 5246 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5246 writes, 540 syncs, 9.71 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 316 writes, 658 keys, 316 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s#012Interval WAL: 316 writes, 158 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:18:39 localhost ansible-async_wrapper.py[66628]: Invoked with 399744111602 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317918.7687347-108730-106179205292456/AnsiballZ_command.py _ Nov 28 03:18:39 localhost ansible-async_wrapper.py[66631]: Starting module and watcher Nov 28 03:18:39 localhost ansible-async_wrapper.py[66631]: Start watching 66632 (3600) Nov 28 03:18:39 localhost ansible-async_wrapper.py[66632]: Start module (66632) Nov 28 03:18:39 localhost ansible-async_wrapper.py[66628]: Return async_wrapper task started. Nov 28 03:18:39 localhost python3[66649]: ansible-ansible.legacy.async_status Invoked with jid=399744111602.66628 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:18:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:18:41 localhost systemd[1]: tmp-crun.p7cccb.mount: Deactivated successfully. Nov 28 03:18:41 localhost podman[66668]: 2025-11-28 08:18:41.013957983 +0000 UTC m=+0.114316342 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:18:41 localhost podman[66668]: 2025-11-28 08:18:41.224247329 +0000 UTC m=+0.324605708 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, batch=17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4) Nov 28 03:18:41 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:18:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:18:42 localhost podman[66735]: 2025-11-28 08:18:42.271315447 +0000 UTC m=+0.087491776 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044) Nov 28 03:18:42 localhost podman[66735]: 2025-11-28 08:18:42.287946609 +0000 UTC m=+0.104122938 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Nov 28 03:18:42 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:18:43 localhost puppet-user[66652]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. 
It should be converted to version 5 Nov 28 03:18:43 localhost puppet-user[66652]: (file: /etc/puppet/hiera.yaml) Nov 28 03:18:43 localhost puppet-user[66652]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:18:43 localhost puppet-user[66652]: (file & line not available) Nov 28 03:18:43 localhost puppet-user[66652]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:18:43 localhost puppet-user[66652]: (file & line not available) Nov 28 03:18:43 localhost puppet-user[66652]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:18:43 localhost puppet-user[66652]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:18:43 localhost puppet-user[66652]: with Stdlib::Compat::Array. 
There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:18:43 localhost puppet-user[66652]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:18:43 localhost puppet-user[66652]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:18:43 localhost puppet-user[66652]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:18:43 localhost puppet-user[66652]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:18:43 localhost puppet-user[66652]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 28 03:18:43 localhost puppet-user[66652]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.23 seconds Nov 28 03:18:44 localhost ansible-async_wrapper.py[66631]: 66632 still running (3600) Nov 28 03:18:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:18:44 localhost systemd[1]: tmp-crun.eKgWoM.mount: Deactivated successfully. Nov 28 03:18:45 localhost podman[66823]: 2025-11-28 08:18:45.001750308 +0000 UTC m=+0.109813374 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, url=https://www.redhat.com, vcs-type=git) Nov 28 03:18:45 localhost podman[66823]: 2025-11-28 08:18:45.009856228 +0000 UTC m=+0.117919294 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:18:45 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:18:49 localhost ansible-async_wrapper.py[66631]: 66632 still running (3595) Nov 28 03:18:49 localhost python3[66923]: ansible-ansible.legacy.async_status Invoked with jid=399744111602.66628 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:18:51 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 03:18:51 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 03:18:51 localhost systemd[1]: Reloading. Nov 28 03:18:51 localhost systemd-sysv-generator[67013]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:51 localhost systemd-rc-local-generator[67008]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:51 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:51 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 03:18:52 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 03:18:52 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 03:18:52 localhost systemd[1]: man-db-cache-update.service: Consumed 1.417s CPU time. Nov 28 03:18:52 localhost systemd[1]: run-rde87af1a5567449f9804dd1d860a13de.service: Deactivated successfully. 
Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}c5de38a09c7b13f562bf9286cb62fdd8d525e0c48ddd85d239ce86e29d135367' Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd' Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea' Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97' Nov 28 03:18:53 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events Nov 28 03:18:54 localhost ansible-async_wrapper.py[66631]: 66632 still running (3590) Nov 28 03:18:58 localhost puppet-user[66652]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully Nov 28 03:18:58 localhost systemd[1]: Reloading. Nov 28 03:18:58 localhost systemd-rc-local-generator[68054]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:58 localhost systemd-sysv-generator[68060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:58 localhost systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon.... Nov 28 03:18:58 localhost snmpd[68067]: Can't find directory of RPM packages Nov 28 03:18:59 localhost snmpd[68067]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB Nov 28 03:18:59 localhost systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon.. Nov 28 03:18:59 localhost systemd[1]: Reloading. Nov 28 03:18:59 localhost systemd-sysv-generator[68097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:18:59 localhost systemd-rc-local-generator[68090]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:59 localhost ansible-async_wrapper.py[66631]: 66632 still running (3585) Nov 28 03:18:59 localhost systemd[1]: Reloading. Nov 28 03:18:59 localhost systemd-rc-local-generator[68131]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:18:59 localhost systemd-sysv-generator[68134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 03:18:59 localhost sshd[68140]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:18:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:18:59 localhost puppet-user[66652]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running' Nov 28 03:18:59 localhost puppet-user[66652]: Notice: Applied catalog in 15.94 seconds Nov 28 03:18:59 localhost puppet-user[66652]: Application: Nov 28 03:18:59 localhost puppet-user[66652]: Initial environment: production Nov 28 03:18:59 localhost puppet-user[66652]: Converged environment: production Nov 28 03:18:59 localhost puppet-user[66652]: Run mode: user Nov 28 03:18:59 localhost puppet-user[66652]: Changes: Nov 28 03:18:59 localhost puppet-user[66652]: Total: 8 Nov 28 03:18:59 localhost puppet-user[66652]: Events: Nov 28 03:18:59 localhost puppet-user[66652]: Success: 8 Nov 28 03:18:59 localhost puppet-user[66652]: Total: 8 Nov 28 03:18:59 localhost puppet-user[66652]: Resources: Nov 28 03:18:59 localhost puppet-user[66652]: Restarted: 1 Nov 28 03:18:59 localhost puppet-user[66652]: Changed: 8 Nov 28 03:18:59 localhost puppet-user[66652]: Out of sync: 8 Nov 28 03:18:59 localhost puppet-user[66652]: Total: 19 Nov 28 03:18:59 localhost puppet-user[66652]: Time: Nov 28 03:18:59 localhost puppet-user[66652]: Filebucket: 0.00 Nov 28 03:18:59 localhost puppet-user[66652]: Schedule: 0.00 Nov 28 03:18:59 localhost puppet-user[66652]: Augeas: 0.01 Nov 28 03:18:59 localhost puppet-user[66652]: File: 0.11 Nov 28 03:18:59 localhost puppet-user[66652]: Config retrieval: 0.30 Nov 28 03:18:59 localhost puppet-user[66652]: Service: 1.27 Nov 28 03:18:59 localhost puppet-user[66652]: Transaction evaluation: 15.92 Nov 28 03:18:59 localhost puppet-user[66652]: Catalog application: 15.94 Nov 28 03:18:59 localhost puppet-user[66652]: Last run: 1764317939 Nov 28 03:18:59 
localhost puppet-user[66652]: Exec: 5.10 Nov 28 03:18:59 localhost puppet-user[66652]: Package: 9.23 Nov 28 03:18:59 localhost puppet-user[66652]: Total: 15.94 Nov 28 03:18:59 localhost puppet-user[66652]: Version: Nov 28 03:18:59 localhost puppet-user[66652]: Config: 1764317923 Nov 28 03:18:59 localhost puppet-user[66652]: Puppet: 7.10.0 Nov 28 03:18:59 localhost ansible-async_wrapper.py[66632]: Module complete (66632) Nov 28 03:19:00 localhost python3[68157]: ansible-ansible.legacy.async_status Invoked with jid=399744111602.66628 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:19:00 localhost python3[68173]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 03:19:01 localhost python3[68189]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:01 localhost python3[68239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:02 localhost python3[68257]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpzfewyovu recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None 
seuser=None serole=None attributes=None Nov 28 03:19:02 localhost python3[68287]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:03 localhost python3[68470]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 28 03:19:04 localhost ansible-async_wrapper.py[66631]: Done in kid B. 
Nov 28 03:19:04 localhost python3[68553]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:05 localhost podman[68609]: Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.073318687 +0000 UTC m=+0.082926365 container create 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, release=553, RELEASE=main, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph) Nov 28 03:19:05 localhost systemd[1]: Started libpod-conmon-7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b.scope. Nov 28 03:19:05 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.041975651 +0000 UTC m=+0.051583379 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.15751051 +0000 UTC m=+0.167118238 container init 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, name=rhceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True) Nov 28 03:19:05 localhost systemd[1]: tmp-crun.1RurvL.mount: Deactivated successfully. 
Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.17243292 +0000 UTC m=+0.182040608 container start 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, ceph=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.172732059 +0000 UTC m=+0.182339837 container attach 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, release=553, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph) Nov 28 03:19:05 localhost systemd[1]: libpod-7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b.scope: Deactivated successfully. Nov 28 03:19:05 localhost strange_pasteur[68624]: 167 167 Nov 28 03:19:05 localhost podman[68609]: 2025-11-28 08:19:05.178562808 +0000 UTC m=+0.188170516 container died 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, ceph=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 03:19:05 localhost podman[68630]: 2025-11-28 08:19:05.285015207 +0000 UTC m=+0.095053649 container remove 7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_pasteur, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, architecture=x86_64, version=7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main) Nov 28 03:19:05 localhost systemd[1]: libpod-conmon-7344aa13f74d173a9771e50a99178889ef349c030015ef0bbe7acd133d46c48b.scope: Deactivated successfully. 
Nov 28 03:19:05 localhost podman[68664]: Nov 28 03:19:05 localhost podman[68664]: 2025-11-28 08:19:05.515561717 +0000 UTC m=+0.080092798 container create 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, io.buildah.version=1.33.12, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, RELEASE=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph) Nov 28 03:19:05 localhost systemd[1]: Started libpod-conmon-47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f.scope. Nov 28 03:19:05 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f4f1aa5f31845c4962ceebcd671f3caafeb2f2d76f906720e6f31fba9991c/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f4f1aa5f31845c4962ceebcd671f3caafeb2f2d76f906720e6f31fba9991c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e52f4f1aa5f31845c4962ceebcd671f3caafeb2f2d76f906720e6f31fba9991c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:05 localhost podman[68664]: 2025-11-28 08:19:05.482724756 +0000 UTC m=+0.047255847 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 03:19:05 localhost podman[68664]: 2025-11-28 08:19:05.582017044 +0000 UTC m=+0.146548125 container init 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.openshift.expose-services=, name=rhceph, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, RELEASE=main, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, ceph=True, 
CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, release=553) Nov 28 03:19:05 localhost python3[68666]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:05 localhost podman[68664]: 2025-11-28 08:19:05.594984773 +0000 UTC m=+0.159515824 container start 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55) Nov 28 03:19:05 localhost podman[68664]: 2025-11-28 08:19:05.603600438 +0000 UTC m=+0.168131479 container attach 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553) Nov 28 03:19:06 localhost systemd[1]: var-lib-containers-storage-overlay-33a6bdafd4609c0f505a3833915d396dc809123f2eb6bfcea64c83bbe3b2e0f7-merged.mount: Deactivated successfully. 
Nov 28 03:19:06 localhost python3[68746]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:06 localhost python3[69298]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:06 localhost admiring_ride[68680]: [ Nov 28 03:19:06 localhost admiring_ride[68680]: { Nov 28 03:19:06 localhost admiring_ride[68680]: "available": false, Nov 28 03:19:06 localhost admiring_ride[68680]: "ceph_device": false, Nov 28 03:19:06 localhost admiring_ride[68680]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 28 03:19:06 localhost admiring_ride[68680]: "lsm_data": {}, Nov 28 03:19:06 localhost admiring_ride[68680]: "lvs": [], Nov 28 03:19:06 localhost admiring_ride[68680]: "path": "/dev/sr0", Nov 28 03:19:06 localhost admiring_ride[68680]: "rejected_reasons": [ Nov 28 03:19:06 localhost admiring_ride[68680]: "Insufficient space (<5GB)", Nov 28 03:19:06 localhost admiring_ride[68680]: "Has a FileSystem" Nov 28 03:19:06 localhost admiring_ride[68680]: ], Nov 28 03:19:06 localhost admiring_ride[68680]: "sys_api": { Nov 28 03:19:06 localhost admiring_ride[68680]: "actuators": null, Nov 28 03:19:06 localhost admiring_ride[68680]: "device_nodes": "sr0", Nov 28 03:19:06 localhost admiring_ride[68680]: "human_readable_size": "482.00 KB", Nov 28 03:19:06 localhost admiring_ride[68680]: "id_bus": "ata", Nov 28 03:19:06 localhost admiring_ride[68680]: "model": "QEMU DVD-ROM", Nov 28 03:19:06 localhost 
admiring_ride[68680]: "nr_requests": "2", Nov 28 03:19:06 localhost admiring_ride[68680]: "partitions": {}, Nov 28 03:19:06 localhost admiring_ride[68680]: "path": "/dev/sr0", Nov 28 03:19:06 localhost admiring_ride[68680]: "removable": "1", Nov 28 03:19:06 localhost admiring_ride[68680]: "rev": "2.5+", Nov 28 03:19:06 localhost admiring_ride[68680]: "ro": "0", Nov 28 03:19:06 localhost admiring_ride[68680]: "rotational": "1", Nov 28 03:19:06 localhost admiring_ride[68680]: "sas_address": "", Nov 28 03:19:06 localhost admiring_ride[68680]: "sas_device_handle": "", Nov 28 03:19:06 localhost admiring_ride[68680]: "scheduler_mode": "mq-deadline", Nov 28 03:19:06 localhost admiring_ride[68680]: "sectors": 0, Nov 28 03:19:06 localhost admiring_ride[68680]: "sectorsize": "2048", Nov 28 03:19:06 localhost admiring_ride[68680]: "size": 493568.0, Nov 28 03:19:06 localhost admiring_ride[68680]: "support_discard": "0", Nov 28 03:19:06 localhost admiring_ride[68680]: "type": "disk", Nov 28 03:19:06 localhost admiring_ride[68680]: "vendor": "QEMU" Nov 28 03:19:06 localhost admiring_ride[68680]: } Nov 28 03:19:06 localhost admiring_ride[68680]: } Nov 28 03:19:06 localhost admiring_ride[68680]: ] Nov 28 03:19:06 localhost systemd[1]: libpod-47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f.scope: Deactivated successfully. Nov 28 03:19:06 localhost systemd[1]: libpod-47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f.scope: Consumed 1.017s CPU time. 
Nov 28 03:19:06 localhost podman[68664]: 2025-11-28 08:19:06.610467807 +0000 UTC m=+1.174998938 container died 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, version=7, GIT_CLEAN=True, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-09-24T08:57:55, name=rhceph, CEPH_POINT_RELEASE=) Nov 28 03:19:06 localhost systemd[1]: var-lib-containers-storage-overlay-e52f4f1aa5f31845c4962ceebcd671f3caafeb2f2d76f906720e6f31fba9991c-merged.mount: Deactivated successfully. 
Nov 28 03:19:06 localhost podman[70331]: 2025-11-28 08:19:06.711395736 +0000 UTC m=+0.090301522 container remove 47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_ride, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 28 03:19:06 localhost systemd[1]: libpod-conmon-47e37f8bdaafc8a45e0942c4c591c464471070f5960643a9b2caf38c7ebbc11f.scope: Deactivated successfully. 
Nov 28 03:19:07 localhost python3[70393]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:07 localhost python3[70411]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:07 localhost python3[70488]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:08 localhost python3[70506]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:08 localhost python3[70569]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:09 localhost python3[70587]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root 
dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:09 localhost python3[70617]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:09 localhost systemd[1]: Reloading. Nov 28 03:19:09 localhost systemd-rc-local-generator[70642]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:09 localhost systemd-sysv-generator[70647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 03:19:10 localhost python3[70702]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:10 localhost python3[70720]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:11 localhost python3[70782]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:19:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:19:11 localhost podman[70801]: 2025-11-28 08:19:11.637593232 +0000 UTC m=+0.106251943 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, 
build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible) Nov 28 03:19:11 localhost python3[70800]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:11 localhost podman[70801]: 2025-11-28 08:19:11.857536235 +0000 UTC m=+0.326194916 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Nov 28 03:19:11 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:19:12 localhost python3[70862]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:12 localhost systemd[1]: Reloading. Nov 28 03:19:12 localhost systemd-sysv-generator[70886]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:12 localhost systemd-rc-local-generator[70882]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:19:12 localhost systemd[1]: Starting Create netns directory... Nov 28 03:19:12 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 03:19:12 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 03:19:12 localhost systemd[1]: Finished Create netns directory. Nov 28 03:19:12 localhost systemd[1]: tmp-crun.uXNoWY.mount: Deactivated successfully. 
Nov 28 03:19:12 localhost podman[70899]: 2025-11-28 08:19:12.606713569 +0000 UTC m=+0.098361930 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd) Nov 28 03:19:12 localhost podman[70899]: 2025-11-28 08:19:12.645582036 +0000 UTC m=+0.137230347 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, 
tcib_managed=true, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:19:12 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:19:13 localhost python3[70939]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:19:15 localhost podman[70999]: 2025-11-28 08:19:15.214254785 +0000 UTC m=+0.081963976 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, container_name=iscsid, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container) Nov 28 03:19:15 localhost podman[70999]: 2025-11-28 08:19:15.230424403 +0000 UTC m=+0.098133594 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3) Nov 28 03:19:15 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:19:15 localhost python3[70998]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 28 03:19:15 localhost podman[71174]: 2025-11-28 08:19:15.57027093 +0000 UTC m=+0.058107071 container create 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., 
name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 28 03:19:15 localhost podman[71179]: 2025-11-28 08:19:15.597800967 +0000 UTC m=+0.082636156 container create d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.619095633 +0000 UTC m=+0.076940530 container create 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=configure_cms_options, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Nov 28 03:19:15 localhost systemd[1]: Started libpod-conmon-d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.scope. Nov 28 03:19:15 localhost podman[71174]: 2025-11-28 08:19:15.539834482 +0000 UTC m=+0.027670633 image pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 28 03:19:15 localhost podman[71179]: 2025-11-28 08:19:15.543558767 +0000 UTC m=+0.028393956 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.654576066 +0000 UTC m=+0.096583706 container create f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_libvirt_init_secret, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:35:22Z, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}) Nov 28 03:19:15 localhost podman[71212]: 2025-11-28 08:19:15.666244085 +0000 UTC m=+0.112358401 container create 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.568925758 +0000 UTC m=+0.026770685 image pull 
registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 28 03:19:15 localhost systemd[1]: Started libpod-conmon-54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c.scope. Nov 28 03:19:15 localhost systemd[1]: Started libcrun container. Nov 28 03:19:15 localhost systemd[1]: Started libpod-conmon-719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.scope. Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d613e9ce43651b4a22ba11f5bcafcb4dcc9b302834037925dc9c415fac8e707f/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost systemd[1]: Started libpod-conmon-f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d.scope. Nov 28 03:19:15 localhost systemd[1]: Started libcrun container. Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.589004046 +0000 UTC m=+0.031011676 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 Nov 28 03:19:15 localhost systemd[1]: Started libcrun container. Nov 28 03:19:15 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93bf0314bbd4063198be021c760bb47b8172c6cfa3163da2b90a6f202605824f/merged/var/log/containers supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740c106cec230c8e88a51062b1e59b5e7fe0e9195732430d85360787d0335118/merged/etc/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740c106cec230c8e88a51062b1e59b5e7fe0e9195732430d85360787d0335118/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740c106cec230c8e88a51062b1e59b5e7fe0e9195732430d85360787d0335118/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost systemd[1]: Started libpod-conmon-7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.scope. 
Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.702975256 +0000 UTC m=+0.160820163 container init 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options) Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:19:15 localhost podman[71179]: 2025-11-28 08:19:15.707732333 +0000 UTC m=+0.192567512 container init d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.711629553 +0000 UTC m=+0.169474470 container start 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 
'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=configure_cms_options, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, 
release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.711825759 +0000 UTC m=+0.169670656 container attach 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:19:15 localhost podman[71212]: 2025-11-28 08:19:15.618088793 +0000 UTC m=+0.064203129 image pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 28 03:19:15 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2363e42c8cc93f560c242c278a1b76f810df60301763e880790aefc5b17b52f/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:19:15 localhost podman[71179]: 2025-11-28 08:19:15.738699967 +0000 UTC m=+0.223535186 container start d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public) Nov 28 03:19:15 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=185ba876a5902dbf87b8591344afd39d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. 
Nov 28 03:19:15 localhost podman[71174]: 2025-11-28 08:19:15.767852754 +0000 UTC m=+0.255688895 container init 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:19:15 localhost ovs-vsctl[71310]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options Nov 28 03:19:15 localhost systemd[1]: libpod-54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c.scope: Deactivated successfully. Nov 28 03:19:15 localhost podman[71202]: 2025-11-28 08:19:15.813194131 +0000 UTC m=+0.271039048 container died 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=configure_cms_options, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:19:15 localhost podman[71174]: 2025-11-28 08:19:15.837473609 +0000 UTC m=+0.325309750 container start 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Nov 28 03:19:15 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.858136605 +0000 UTC m=+0.300144235 container init f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 
'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_libvirt_init_secret, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, 
distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:19:15 localhost podman[71212]: 2025-11-28 08:19:15.867615467 +0000 UTC m=+0.313729883 container init 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.8680266 +0000 UTC m=+0.310034220 container start f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vendor=Red Hat, Inc., container_name=nova_libvirt_init_secret, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.868215746 +0000 UTC m=+0.310223396 container attach f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-type=git, 
io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Nov 28 03:19:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:19:15 localhost podman[71288]: 2025-11-28 08:19:15.938959964 +0000 UTC m=+0.197253375 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, version=17.1.12) Nov 28 03:19:15 localhost podman[71288]: 2025-11-28 08:19:15.955301638 +0000 UTC m=+0.213595049 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:19:15 localhost podman[71288]: unhealthy Nov 28 03:19:15 localhost systemd[1]: 
d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:19:15 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Failed with result 'exit-code'. Nov 28 03:19:15 localhost systemd[1]: libpod-f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d.scope: Deactivated successfully. Nov 28 03:19:15 localhost podman[71226]: 2025-11-28 08:19:15.978317466 +0000 UTC m=+0.420325096 container died f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:35:22Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': 
['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Nov 28 03:19:16 localhost podman[71315]: 2025-11-28 08:19:16.00470754 +0000 UTC m=+0.185318830 container cleanup 54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=configure_cms_options, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}) Nov 28 03:19:16 localhost systemd[1]: libpod-conmon-54ecd038a8454111a00491ca2068a3bc5f289d100154e1428ef79ea1bdd09f9c.scope: Deactivated successfully. 
Nov 28 03:19:16 localhost podman[71326]: 2025-11-28 08:19:15.908471115 +0000 UTC m=+0.067636604 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:19:16 localhost podman[71416]: 2025-11-28 08:19:16.025172149 +0000 UTC m=+0.044427409 container cleanup f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, container_name=nova_libvirt_init_secret, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 03:19:16 localhost systemd[1]: libpod-conmon-f55e5efc588f54c78f3bd9849c02f7371bb35fe8b0ebb7a16930501c2e5d968d.scope: Deactivated successfully. 
Nov 28 03:19:16 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack Nov 28 03:19:16 localhost podman[71326]: 2025-11-28 08:19:16.040293075 +0000 UTC m=+0.199458574 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, architecture=x86_64, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Nov 28 03:19:16 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:19:16 localhost podman[71212]: 2025-11-28 08:19:16.062965063 +0000 UTC m=+0.509079369 container start 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1) Nov 28 03:19:16 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml); if [ X"$CMS_OPTS" != X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi Nov 28 03:19:16 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=185ba876a5902dbf87b8591344afd39d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 Nov 28 03:19:16 localhost podman[71379]: 2025-11-28 08:19:16.079177042 +0000 UTC m=+0.167515960 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Nov 28 03:19:16 localhost podman[71379]: 2025-11-28 08:19:16.088311003 +0000 UTC m=+0.176649961 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:19:16 localhost podman[71379]: unhealthy Nov 28 03:19:16 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:19:16 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. Nov 28 03:19:16 localhost podman[71537]: 2025-11-28 08:19:16.266904634 +0000 UTC m=+0.067943143 container create e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:19:16 localhost systemd[1]: Started libpod-conmon-e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.scope. Nov 28 03:19:16 localhost systemd[1]: Started libcrun container. Nov 28 03:19:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3850d276a9594c52a78e85d7b58db016dc835caf89f3a263b0f9d37a3754a60d/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:16 localhost podman[71537]: 2025-11-28 08:19:16.22391345 +0000 UTC m=+0.024951959 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:19:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:19:16 localhost podman[71537]: 2025-11-28 08:19:16.335022443 +0000 UTC m=+0.136060982 container init e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:19:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:19:16 localhost podman[71537]: 2025-11-28 08:19:16.360200348 +0000 UTC m=+0.161238857 container start e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z) Nov 28 03:19:16 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:19:16 localhost podman[71593]: 
2025-11-28 08:19:16.431438692 +0000 UTC m=+0.068315345 container create dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=setup_ovs_manager, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:19:16 localhost systemd[1]: Started libpod-conmon-dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c.scope. Nov 28 03:19:16 localhost podman[71592]: 2025-11-28 08:19:16.471850286 +0000 UTC m=+0.105866341 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:19:16 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:16 localhost podman[71593]: 2025-11-28 08:19:16.491256273 +0000 UTC m=+0.128132916 container init dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=setup_ovs_manager, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64) Nov 28 03:19:16 localhost podman[71593]: 2025-11-28 08:19:16.392608906 +0000 UTC m=+0.029485579 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 28 03:19:16 localhost podman[71593]: 2025-11-28 08:19:16.505454341 +0000 UTC m=+0.142330984 container start dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=setup_ovs_manager, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:19:16 localhost podman[71593]: 2025-11-28 08:19:16.506474963 +0000 UTC m=+0.143351616 container attach dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.openshift.expose-services=, container_name=setup_ovs_manager, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}) Nov 28 03:19:16 localhost podman[71592]: 2025-11-28 08:19:16.82622323 +0000 UTC m=+0.460239375 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git) Nov 28 03:19:16 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:19:17 localhost kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure Nov 28 03:19:19 localhost ovs-vsctl[71787]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager Nov 28 03:19:19 localhost systemd[1]: libpod-dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c.scope: Deactivated successfully. Nov 28 03:19:19 localhost systemd[1]: libpod-dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c.scope: Consumed 2.938s CPU time. 
Nov 28 03:19:19 localhost podman[71788]: 2025-11-28 08:19:19.556753444 +0000 UTC m=+0.054531331 container died dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=setup_ovs_manager, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public) Nov 28 03:19:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c-userdata-shm.mount: Deactivated successfully. Nov 28 03:19:19 localhost systemd[1]: var-lib-containers-storage-overlay-a820d2d170f2b0bbf4a680f8c0da82218646a321cb82318df9e6b8161dc1d2c6-merged.mount: Deactivated successfully. 
Nov 28 03:19:19 localhost podman[71788]: 2025-11-28 08:19:19.594482236 +0000 UTC m=+0.092260053 container cleanup dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, container_name=setup_ovs_manager, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 03:19:19 localhost systemd[1]: libpod-conmon-dffa592367a31755f9497a289202797714e3584bcc35fa55351b1c65c9a8d73c.scope: Deactivated successfully. Nov 28 03:19:19 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764316155 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764316155'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata Nov 28 03:19:20 localhost podman[71895]: 2025-11-28 08:19:20.041444862 +0000 UTC m=+0.087215098 container create e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:19:20 localhost podman[71896]: 2025-11-28 08:19:20.074984685 +0000 UTC m=+0.115037074 container create 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1) Nov 28 03:19:20 localhost systemd[1]: Started libpod-conmon-e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.scope. Nov 28 03:19:20 localhost podman[71895]: 2025-11-28 08:19:19.992623088 +0000 UTC m=+0.038393354 image pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 28 03:19:20 localhost systemd[1]: Started libpod-conmon-9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.scope. Nov 28 03:19:20 localhost systemd[1]: Started libcrun container. Nov 28 03:19:20 localhost podman[71896]: 2025-11-28 08:19:20.008151626 +0000 UTC m=+0.048204065 image pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 28 03:19:20 localhost systemd[1]: Started libcrun container. 
Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bc8e2b5666799e64c84f093eb3569ddc3bccd8602a09788ea75d9b81e61916/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22314ee7dcc5723035b6772f98d17adedfb1f7b03c71f0801082e550913dd450/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22314ee7dcc5723035b6772f98d17adedfb1f7b03c71f0801082e550913dd450/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22314ee7dcc5723035b6772f98d17adedfb1f7b03c71f0801082e550913dd450/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bc8e2b5666799e64c84f093eb3569ddc3bccd8602a09788ea75d9b81e61916/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6bc8e2b5666799e64c84f093eb3569ddc3bccd8602a09788ea75d9b81e61916/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff) Nov 28 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 03:19:20 localhost podman[71896]: 2025-11-28 08:19:20.135472437 +0000 UTC m=+0.175524816 container init 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=) Nov 28 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:19:20 localhost podman[71895]: 2025-11-28 08:19:20.138800429 +0000 UTC m=+0.184570675 container init e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12) Nov 28 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 03:19:20 localhost podman[71896]: 2025-11-28 08:19:20.163979665 +0000 UTC m=+0.204032024 container start 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, tcib_managed=true) Nov 28 03:19:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:19:20 localhost podman[71895]: 2025-11-28 08:19:20.166579435 +0000 UTC m=+0.212349671 container start e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:19:20 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 Nov 28 03:19:20 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. 
Nov 28 03:19:20 localhost python3[70998]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=08c21dad54d1ba598c6e2fae6b853aba --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume 
/etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 Nov 28 03:19:20 localhost systemd[1]: Created slice User Slice of UID 0. Nov 28 03:19:20 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 28 03:19:20 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 28 03:19:20 localhost systemd[1]: Starting User Manager for UID 0... 
Nov 28 03:19:20 localhost podman[71941]: 2025-11-28 08:19:20.239696467 +0000 UTC m=+0.069621275 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, tcib_managed=true, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:19:20 localhost podman[71942]: 2025-11-28 08:19:20.299664594 +0000 UTC m=+0.125677281 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, distribution-scope=public, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:19:20 localhost systemd[71972]: Queued start job for default target Main User Target. Nov 28 03:19:20 localhost systemd[71972]: Created slice User Application Slice. Nov 28 03:19:20 localhost systemd[71972]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 28 03:19:20 localhost systemd[71972]: Started Daily Cleanup of User's Temporary Directories. Nov 28 03:19:20 localhost systemd[71972]: Reached target Paths. Nov 28 03:19:20 localhost systemd[71972]: Reached target Timers. 
Nov 28 03:19:20 localhost podman[71941]: 2025-11-28 08:19:20.330907526 +0000 UTC m=+0.160832354 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4) Nov 28 03:19:20 localhost systemd[71972]: Starting D-Bus User Message Bus Socket... Nov 28 03:19:20 localhost systemd[71972]: Starting Create User's Volatile Files and Directories... Nov 28 03:19:20 localhost podman[71941]: unhealthy Nov 28 03:19:20 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:19:20 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:19:20 localhost podman[71942]: 2025-11-28 08:19:20.344441617 +0000 UTC m=+0.170454304 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:19:20 localhost systemd[71972]: Listening on D-Bus User Message Bus Socket. Nov 28 03:19:20 localhost systemd[71972]: Reached target Sockets. 
Nov 28 03:19:20 localhost systemd[71972]: Finished Create User's Volatile Files and Directories. Nov 28 03:19:20 localhost systemd[71972]: Reached target Basic System. Nov 28 03:19:20 localhost systemd[71972]: Reached target Main User Target. Nov 28 03:19:20 localhost systemd[71972]: Startup finished in 112ms. Nov 28 03:19:20 localhost systemd[1]: Started User Manager for UID 0. Nov 28 03:19:20 localhost podman[71942]: unhealthy Nov 28 03:19:20 localhost systemd[1]: Started Session c9 of User root. Nov 28 03:19:20 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:19:20 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:19:20 localhost systemd[1]: session-c9.scope: Deactivated successfully. Nov 28 03:19:20 localhost kernel: device br-int entered promiscuous mode Nov 28 03:19:20 localhost NetworkManager[5965]: [1764317960.4412] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11) Nov 28 03:19:20 localhost systemd-udevd[72045]: Network interface NamePolicy= disabled on kernel command line. Nov 28 03:19:20 localhost systemd-udevd[72047]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 03:19:20 localhost NetworkManager[5965]: [1764317960.5141] device (genev_sys_6081): carrier: link connected Nov 28 03:19:20 localhost NetworkManager[5965]: [1764317960.5145] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12) Nov 28 03:19:20 localhost kernel: device genev_sys_6081 entered promiscuous mode Nov 28 03:19:20 localhost python3[72067]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:21 localhost python3[72083]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:21 localhost python3[72099]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:21 localhost python3[72115]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:21 localhost python3[72131]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:22 localhost python3[72150]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:22 localhost python3[72167]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:22 localhost python3[72184]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:22 localhost python3[72201]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:23 localhost python3[72219]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer 
follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:23 localhost python3[72235]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:23 localhost python3[72251]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:19:24 localhost python3[72312]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:24 localhost python3[72341]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:25 localhost python3[72370]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER 
validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:25 localhost python3[72399]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:26 localhost python3[72428]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:26 localhost python3[72457]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764317963.7044015-110016-123672381347547/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:27 localhost python3[72473]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 03:19:27 localhost systemd[1]: Reloading. 
Nov 28 03:19:27 localhost systemd-sysv-generator[72504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:27 localhost systemd-rc-local-generator[72498]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:28 localhost python3[72525]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:28 localhost systemd[1]: Reloading. Nov 28 03:19:28 localhost systemd-rc-local-generator[72550]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:28 localhost systemd-sysv-generator[72556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:28 localhost systemd[1]: Starting ceilometer_agent_compute container... Nov 28 03:19:28 localhost tripleo-start-podman-container[72565]: Creating additional drop-in dependency for "ceilometer_agent_compute" (d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff) Nov 28 03:19:28 localhost systemd[1]: Reloading. Nov 28 03:19:28 localhost systemd-rc-local-generator[72621]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 03:19:28 localhost systemd-sysv-generator[72626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:29 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:29 localhost systemd[1]: Started ceilometer_agent_compute container. Nov 28 03:19:29 localhost python3[72650]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:30 localhost systemd[1]: Stopping User Manager for UID 0... Nov 28 03:19:30 localhost systemd[71972]: Activating special unit Exit the Session... Nov 28 03:19:30 localhost systemd[71972]: Stopped target Main User Target. Nov 28 03:19:30 localhost systemd[71972]: Stopped target Basic System. Nov 28 03:19:30 localhost systemd[71972]: Stopped target Paths. Nov 28 03:19:30 localhost systemd[71972]: Stopped target Sockets. Nov 28 03:19:30 localhost systemd[71972]: Stopped target Timers. Nov 28 03:19:30 localhost systemd[71972]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 03:19:30 localhost systemd[71972]: Closed D-Bus User Message Bus Socket. Nov 28 03:19:30 localhost systemd[71972]: Stopped Create User's Volatile Files and Directories. Nov 28 03:19:30 localhost systemd[71972]: Removed slice User Application Slice. Nov 28 03:19:30 localhost systemd[71972]: Reached target Shutdown. Nov 28 03:19:30 localhost systemd[71972]: Finished Exit the Session. Nov 28 03:19:30 localhost systemd[71972]: Reached target Exit the Session. Nov 28 03:19:30 localhost systemd[1]: user@0.service: Deactivated successfully. 
Nov 28 03:19:30 localhost systemd[1]: Stopped User Manager for UID 0. Nov 28 03:19:30 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 28 03:19:30 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 28 03:19:30 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 28 03:19:30 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Nov 28 03:19:30 localhost systemd[1]: Removed slice User Slice of UID 0. Nov 28 03:19:30 localhost systemd[1]: Reloading. Nov 28 03:19:30 localhost systemd-rc-local-generator[72676]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:30 localhost systemd-sysv-generator[72681]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:31 localhost systemd[1]: Starting ceilometer_agent_ipmi container... Nov 28 03:19:31 localhost systemd[1]: Started ceilometer_agent_ipmi container. Nov 28 03:19:31 localhost python3[72718]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:32 localhost systemd[1]: Reloading. Nov 28 03:19:33 localhost systemd-sysv-generator[72747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:33 localhost systemd-rc-local-generator[72743]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 03:19:33 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:33 localhost systemd[1]: Starting logrotate_crond container... Nov 28 03:19:33 localhost systemd[1]: Started logrotate_crond container. Nov 28 03:19:34 localhost python3[72786]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:34 localhost systemd[1]: Reloading. Nov 28 03:19:34 localhost systemd-rc-local-generator[72813]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:34 localhost systemd-sysv-generator[72818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:34 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:34 localhost systemd[1]: Starting nova_migration_target container... Nov 28 03:19:34 localhost systemd[1]: Started nova_migration_target container. Nov 28 03:19:35 localhost python3[72853]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:35 localhost systemd[1]: Reloading. Nov 28 03:19:35 localhost systemd-sysv-generator[72884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 03:19:35 localhost systemd-rc-local-generator[72881]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:35 localhost systemd[1]: Starting ovn_controller container... Nov 28 03:19:35 localhost tripleo-start-podman-container[72893]: Creating additional drop-in dependency for "ovn_controller" (9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164) Nov 28 03:19:35 localhost systemd[1]: Reloading. Nov 28 03:19:36 localhost systemd-rc-local-generator[72948]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:36 localhost systemd-sysv-generator[72953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:19:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:36 localhost systemd[1]: Started ovn_controller container. Nov 28 03:19:36 localhost python3[72976]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:19:37 localhost systemd[1]: Reloading. Nov 28 03:19:37 localhost systemd-rc-local-generator[73004]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:19:37 localhost systemd-sysv-generator[73008]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 03:19:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:19:37 localhost systemd[1]: Starting ovn_metadata_agent container... Nov 28 03:19:37 localhost systemd[1]: Started ovn_metadata_agent container. Nov 28 03:19:37 localhost python3[73056]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:39 localhost python3[73178]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005538515 step=4 update_config_hash_only=False Nov 28 03:19:39 localhost python3[73194]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:19:40 localhost python3[73210]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 28 03:19:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:19:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:19:42 localhost podman[73212]: 2025-11-28 08:19:42.996561718 +0000 UTC m=+0.100726218 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vcs-type=git) Nov 28 03:19:43 localhost podman[73213]: 2025-11-28 08:19:43.047695861 +0000 UTC m=+0.149212209 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, managed_by=tripleo_ansible, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com) Nov 28 03:19:43 localhost podman[73213]: 2025-11-28 08:19:43.065742363 +0000 UTC m=+0.167258741 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, container_name=collectd) Nov 28 03:19:43 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:19:43 localhost podman[73212]: 2025-11-28 08:19:43.190937313 +0000 UTC m=+0.295101773 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 03:19:43 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:19:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:19:46 localhost podman[73266]: 2025-11-28 08:19:46.000850702 +0000 UTC m=+0.100440710 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:19:46 localhost podman[73266]: 2025-11-28 08:19:46.040565009 +0000 UTC m=+0.140154987 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, architecture=x86_64, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:19:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. 
Nov 28 03:19:46 localhost podman[73284]: 2025-11-28 08:19:46.098260077 +0000 UTC m=+0.086235868 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:19:46 localhost podman[73284]: 2025-11-28 08:19:46.139473321 +0000 UTC m=+0.127449102 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, vcs-type=git, build-date=2025-11-19T00:11:48Z, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:19:46 localhost podman[73297]: 2025-11-28 08:19:46.186846987 +0000 UTC m=+0.095876379 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4) Nov 28 03:19:46 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:19:46 localhost podman[73297]: 2025-11-28 08:19:46.246737692 +0000 UTC m=+0.155767084 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:19:46 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:19:46 localhost podman[73324]: 2025-11-28 08:19:46.261418209 +0000 UTC m=+0.092752950 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:19:46 localhost podman[73324]: 2025-11-28 08:19:46.29450791 +0000 UTC m=+0.125842681 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Nov 28 03:19:46 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:19:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:19:46 localhost podman[73358]: 2025-11-28 08:19:46.974605166 +0000 UTC m=+0.084944788 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:19:46 localhost systemd[1]: tmp-crun.hyy27G.mount: Deactivated successfully. Nov 28 03:19:47 localhost podman[73358]: 2025-11-28 08:19:47.335308452 +0000 UTC m=+0.445648044 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:19:47 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:19:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:19:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:19:50 localhost systemd[1]: tmp-crun.92I7Ag.mount: Deactivated successfully. 
Nov 28 03:19:50 localhost podman[73379]: 2025-11-28 08:19:50.980310526 +0000 UTC m=+0.089026255 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64) Nov 28 03:19:51 localhost podman[73380]: 2025-11-28 08:19:51.026088561 +0000 UTC m=+0.131086344 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:19:51 localhost podman[73379]: 2025-11-28 08:19:51.051986728 +0000 UTC m=+0.160702517 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 28 03:19:51 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:19:51 localhost podman[73380]: 2025-11-28 08:19:51.097588699 +0000 UTC m=+0.202586502 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:19:51 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:19:59 localhost snmpd[68067]: empty variable list in _query Nov 28 03:19:59 localhost snmpd[68067]: empty variable list in _query Nov 28 03:20:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:20:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:20:13 localhost systemd[1]: tmp-crun.3kG97C.mount: Deactivated successfully. 
Nov 28 03:20:13 localhost podman[73505]: 2025-11-28 08:20:13.995231378 +0000 UTC m=+0.097983613 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-collectd, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container) Nov 28 03:20:14 localhost podman[73504]: 2025-11-28 08:20:14.037704041 +0000 UTC m=+0.141655553 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1) Nov 28 03:20:14 localhost podman[73505]: 2025-11-28 08:20:14.082592209 +0000 UTC m=+0.185344404 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, container_name=collectd, summary=Red Hat OpenStack 
Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git) Nov 28 03:20:14 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:20:14 localhost podman[73504]: 2025-11-28 08:20:14.230715424 +0000 UTC m=+0.334666986 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.openshift.expose-services=) Nov 28 03:20:14 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:20:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:20:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:20:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:20:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:20:16 localhost podman[73554]: 2025-11-28 08:20:16.982646148 +0000 UTC m=+0.089808449 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:20:17 localhost podman[73554]: 2025-11-28 08:20:17.012749065 +0000 UTC m=+0.119911336 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:20:17 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:20:17 localhost systemd[1]: tmp-crun.R4pZWB.mount: Deactivated successfully. 
Nov 28 03:20:17 localhost podman[73553]: 2025-11-28 08:20:17.088493534 +0000 UTC m=+0.196675137 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true) Nov 28 03:20:17 localhost podman[73553]: 2025-11-28 08:20:17.098509596 +0000 UTC m=+0.206691189 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:20:17 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:20:17 localhost systemd[1]: tmp-crun.DGAVgo.mount: Deactivated successfully. 
Nov 28 03:20:17 localhost podman[73556]: 2025-11-28 08:20:17.142945761 +0000 UTC m=+0.245241401 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:11:48Z) Nov 28 03:20:17 localhost podman[73555]: 2025-11-28 08:20:17.198676147 +0000 UTC m=+0.300821862 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3) Nov 28 03:20:17 localhost podman[73556]: 2025-11-28 08:20:17.222854419 +0000 UTC m=+0.325150099 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, 
container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:20:17 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:20:17 localhost podman[73555]: 2025-11-28 08:20:17.236859856 +0000 UTC m=+0.339005621 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, version=17.1.12, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:20:17 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:20:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:20:17 localhost podman[73642]: 2025-11-28 08:20:17.965633418 +0000 UTC m=+0.076509645 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target) Nov 28 03:20:18 localhost podman[73642]: 2025-11-28 08:20:18.327630264 +0000 UTC m=+0.438506501 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.) Nov 28 03:20:18 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:20:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:20:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:20:21 localhost systemd[1]: tmp-crun.9aKQG3.mount: Deactivated successfully. 
Nov 28 03:20:21 localhost podman[73667]: 2025-11-28 08:20:21.95701728 +0000 UTC m=+0.065172950 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, io.openshift.expose-services=) Nov 28 03:20:22 localhost podman[73666]: 2025-11-28 08:20:22.018177016 +0000 UTC m=+0.126117860 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, release=1761123044, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z) Nov 28 03:20:22 localhost podman[73666]: 2025-11-28 08:20:22.041331447 +0000 UTC m=+0.149272261 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:20:22 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:20:22 localhost podman[73667]: 2025-11-28 08:20:22.071195587 +0000 UTC m=+0.179351327 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12) Nov 28 03:20:22 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:20:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:20:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:20:44 localhost podman[73715]: 2025-11-28 08:20:44.974780843 +0000 UTC m=+0.085721722 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, 
tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:20:45 localhost podman[73716]: 2025-11-28 08:20:45.038812418 +0000 UTC m=+0.141392066 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, architecture=x86_64, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:20:45 localhost podman[73716]: 2025-11-28 08:20:45.047869609 +0000 UTC m=+0.150449247 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12) Nov 28 03:20:45 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:20:45 localhost podman[73715]: 2025-11-28 08:20:45.1785279 +0000 UTC m=+0.289468759 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, config_id=tripleo_step1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:20:45 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:20:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:20:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:20:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:20:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:20:47 localhost systemd[1]: tmp-crun.p5QKc5.mount: Deactivated successfully. 
Nov 28 03:20:47 localhost podman[73765]: 2025-11-28 08:20:47.981587956 +0000 UTC m=+0.089009684 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:20:48 localhost podman[73765]: 2025-11-28 08:20:48.031679896 +0000 UTC m=+0.139101644 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Nov 28 03:20:48 localhost podman[73766]: 2025-11-28 08:20:48.040876903 +0000 UTC m=+0.142943984 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, 
config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, vcs-type=git, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:20:48 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:20:48 localhost podman[73766]: 2025-11-28 08:20:48.078358331 +0000 UTC m=+0.180425392 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack 
Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=) Nov 28 03:20:48 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:20:48 localhost podman[73767]: 2025-11-28 08:20:48.091767458 +0000 UTC m=+0.192867528 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 28 03:20:48 localhost podman[73764]: 2025-11-28 08:20:48.143892842 +0000 UTC m=+0.252645621 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
name=rhosp17/openstack-cron, version=17.1.12, io.openshift.expose-services=) Nov 28 03:20:48 localhost podman[73764]: 2025-11-28 08:20:48.154323527 +0000 UTC m=+0.263076296 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, release=1761123044, version=17.1.12, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, architecture=x86_64) Nov 28 03:20:48 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:20:48 localhost podman[73767]: 2025-11-28 08:20:48.199104302 +0000 UTC m=+0.300204322 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:20:48 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:20:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:20:48 localhost podman[73853]: 2025-11-28 08:20:48.960694605 +0000 UTC m=+0.072825089 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z) Nov 28 03:20:49 localhost podman[73853]: 2025-11-28 08:20:49.349533317 +0000 UTC m=+0.461663831 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, container_name=nova_migration_target, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Nov 28 03:20:49 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:20:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:20:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:20:52 localhost podman[73878]: 2025-11-28 08:20:52.971428121 +0000 UTC m=+0.075092029 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:20:53 localhost podman[73877]: 2025-11-28 08:20:53.032926227 +0000 UTC m=+0.138269268 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:20:53 localhost podman[73878]: 2025-11-28 08:20:53.042529877 +0000 UTC m=+0.146193755 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 28 03:20:53 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:20:53 localhost podman[73877]: 2025-11-28 08:20:53.060472995 +0000 UTC m=+0.165816026 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, config_id=tripleo_step4, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1) Nov 28 03:20:53 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:21:09 localhost podman[74024]: 2025-11-28 08:21:09.796673922 +0000 UTC m=+0.088926201 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph 
Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 28 03:21:09 localhost podman[74024]: 2025-11-28 08:21:09.88200234 +0000 UTC m=+0.174254669 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, version=7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Nov 28 03:21:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:21:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:21:15 localhost podman[74171]: 2025-11-28 08:21:15.980966067 +0000 UTC m=+0.085065631 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:51:28Z) Nov 28 03:21:16 localhost podman[74170]: 2025-11-28 08:21:16.042459822 +0000 UTC m=+0.148348692 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64) Nov 28 03:21:16 localhost podman[74171]: 2025-11-28 08:21:16.065867051 +0000 UTC m=+0.169966605 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, config_id=tripleo_step3, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, 
managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Nov 28 03:21:16 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:21:16 localhost podman[74170]: 2025-11-28 08:21:16.243510875 +0000 UTC m=+0.349399765 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Nov 28 03:21:16 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:21:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:21:18 localhost systemd[1]: tmp-crun.QyUHX7.mount: Deactivated successfully. 
Nov 28 03:21:18 localhost podman[74221]: 2025-11-28 08:21:18.983334311 +0000 UTC m=+0.089932123 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:21:19 localhost podman[74223]: 2025-11-28 08:21:19.04077745 +0000 UTC m=+0.141258511 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044) Nov 28 03:21:19 localhost podman[74221]: 2025-11-28 08:21:19.064712846 +0000 UTC m=+0.171310618 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, 
vcs-type=git) Nov 28 03:21:19 localhost podman[74223]: 2025-11-28 08:21:19.075240973 +0000 UTC m=+0.175722024 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team) Nov 28 03:21:19 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:21:19 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:21:19 localhost podman[74222]: 2025-11-28 08:21:19.147007779 +0000 UTC m=+0.249584346 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_id=tripleo_step3, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team) Nov 28 03:21:19 localhost podman[74222]: 2025-11-28 08:21:19.185597581 +0000 UTC m=+0.288174098 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack 
Platform 17.1 iscsid, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, container_name=iscsid, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, vcs-type=git, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:21:19 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:21:19 localhost podman[74220]: 2025-11-28 08:21:19.190791583 +0000 UTC m=+0.300545474 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64) Nov 28 03:21:19 localhost podman[74220]: 2025-11-28 08:21:19.274702897 +0000 UTC m=+0.384456798 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat 
OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 28 03:21:19 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:21:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:21:19 localhost podman[74308]: 2025-11-28 08:21:19.985097516 +0000 UTC m=+0.089046375 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:21:20 localhost podman[74308]: 2025-11-28 08:21:20.347935319 +0000 UTC m=+0.451884188 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:21:20 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:21:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:21:23 localhost podman[74333]: 2025-11-28 08:21:23.976633803 +0000 UTC m=+0.082718647 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, tcib_managed=true, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64) Nov 28 03:21:24 localhost podman[74333]: 2025-11-28 08:21:24.031308557 +0000 UTC m=+0.137393401 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Nov 28 03:21:24 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:21:24 localhost podman[74332]: 2025-11-28 08:21:24.035357932 +0000 UTC m=+0.143531131 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, release=1761123044, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:21:24 localhost podman[74332]: 2025-11-28 08:21:24.116562042 +0000 UTC m=+0.224735181 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 
ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4) Nov 28 03:21:24 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:21:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:21:46 localhost systemd[1]: tmp-crun.v6Entt.mount: Deactivated successfully. 
Nov 28 03:21:46 localhost podman[74380]: 2025-11-28 08:21:46.970306245 +0000 UTC m=+0.079262450 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:21:46 localhost systemd[1]: tmp-crun.AjbVmg.mount: Deactivated successfully. Nov 28 03:21:46 localhost podman[74381]: 2025-11-28 08:21:46.987426008 +0000 UTC m=+0.088811828 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:21:47 localhost podman[74381]: 2025-11-28 08:21:46.999561035 +0000 UTC m=+0.100946925 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, 
distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1761123044, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 28 03:21:47 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:21:47 localhost podman[74380]: 2025-11-28 08:21:47.161791859 +0000 UTC m=+0.270748094 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.expose-services=) Nov 28 03:21:47 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:21:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:21:49 localhost podman[74430]: 2025-11-28 08:21:49.99610523 +0000 UTC m=+0.093836883 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Nov 28 03:21:50 localhost podman[74427]: 2025-11-28 08:21:50.027741705 +0000 UTC m=+0.136806171 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 
17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 28 03:21:50 localhost podman[74427]: 2025-11-28 08:21:50.039438281 +0000 UTC m=+0.148502757 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, vcs-type=git) Nov 28 03:21:50 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:21:50 localhost podman[74428]: 2025-11-28 08:21:50.090384367 +0000 UTC m=+0.193963732 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Nov 28 03:21:50 localhost podman[74428]: 2025-11-28 08:21:50.121509847 +0000 UTC m=+0.225089252 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:21:50 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:21:50 localhost podman[74429]: 2025-11-28 08:21:50.140721955 +0000 UTC m=+0.242210046 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-iscsid) Nov 28 03:21:50 localhost podman[74429]: 2025-11-28 08:21:50.152389399 +0000 UTC m=+0.253877530 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team) Nov 28 03:21:50 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:21:50 localhost podman[74430]: 2025-11-28 08:21:50.20861505 +0000 UTC m=+0.306346633 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:21:50 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:21:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:21:50 localhost podman[74520]: 2025-11-28 08:21:50.958882341 +0000 UTC m=+0.074256624 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:21:51 localhost podman[74520]: 2025-11-28 08:21:51.324568952 +0000 UTC m=+0.439943235 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, 
build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, 
konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:21:51 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:21:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:21:54 localhost systemd[1]: tmp-crun.IIkpyX.mount: Deactivated successfully. Nov 28 03:21:54 localhost podman[74544]: 2025-11-28 08:21:54.97486268 +0000 UTC m=+0.083652236 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:21:55 localhost systemd[1]: tmp-crun.Ub2HqM.mount: Deactivated successfully. 
Nov 28 03:21:55 localhost podman[74543]: 2025-11-28 08:21:55.03134418 +0000 UTC m=+0.140956052 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2025-11-18T23:34:05Z, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:21:55 localhost podman[74544]: 2025-11-28 08:21:55.040540236 +0000 UTC m=+0.149329762 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:21:55 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:21:55 localhost podman[74543]: 2025-11-28 08:21:55.057229686 +0000 UTC m=+0.166841568 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:21:55 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:22:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:22:17 localhost podman[74665]: 2025-11-28 08:22:17.983730155 +0000 UTC m=+0.085152404 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64) Nov 28 03:22:18 localhost podman[74666]: 2025-11-28 08:22:18.043458796 +0000 UTC m=+0.141794019 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:22:18 localhost podman[74666]: 2025-11-28 08:22:18.056570654 +0000 UTC m=+0.154905917 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, 
io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.) Nov 28 03:22:18 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:22:18 localhost podman[74665]: 2025-11-28 08:22:18.241638569 +0000 UTC m=+0.343060838 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:22:18 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:22:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:22:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:22:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:22:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:22:20 localhost podman[74715]: 2025-11-28 08:22:20.983351774 +0000 UTC m=+0.087463536 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 28 03:22:21 localhost podman[74716]: 2025-11-28 08:22:21.02912544 +0000 UTC m=+0.131755375 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, architecture=x86_64, distribution-scope=public) Nov 28 03:22:21 localhost podman[74716]: 2025-11-28 08:22:21.0403525 +0000 UTC m=+0.142982395 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=iscsid, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team) Nov 28 03:22:21 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated 
successfully. Nov 28 03:22:21 localhost podman[74717]: 2025-11-28 08:22:21.083912096 +0000 UTC m=+0.182028251 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:22:21 localhost podman[74714]: 2025-11-28 08:22:21.131132137 +0000 UTC m=+0.234600868 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, 
tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.buildah.version=1.41.4) Nov 28 03:22:21 localhost podman[74715]: 2025-11-28 08:22:21.160129091 +0000 UTC m=+0.264240873 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Nov 28 03:22:21 localhost podman[74714]: 2025-11-28 08:22:21.167572272 +0000 UTC m=+0.271041023 container exec_died 
719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container) Nov 28 03:22:21 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:22:21 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:22:21 localhost podman[74717]: 2025-11-28 08:22:21.214419822 +0000 UTC m=+0.312535957 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Nov 28 03:22:21 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:22:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:22:21 localhost podman[74803]: 2025-11-28 08:22:21.969995419 +0000 UTC m=+0.078981471 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:22:22 localhost podman[74803]: 2025-11-28 08:22:22.340326915 +0000 UTC m=+0.449312917 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_migration_target, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:22:22 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:22:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:22:25 localhost systemd[1]: tmp-crun.e5H93g.mount: Deactivated successfully. 
Nov 28 03:22:25 localhost podman[74829]: 2025-11-28 08:22:25.985763971 +0000 UTC m=+0.088911731 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 03:22:26 localhost podman[74829]: 2025-11-28 08:22:26.031234147 +0000 UTC m=+0.134381907 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO 
Team, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:22:26 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:22:26 localhost podman[74828]: 2025-11-28 08:22:26.034152038 +0000 UTC m=+0.138381971 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller) Nov 28 03:22:26 localhost podman[74828]: 2025-11-28 08:22:26.116353379 +0000 UTC m=+0.220583242 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, vcs-type=git, managed_by=tripleo_ansible, 
com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:22:26 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:22:26 localhost python3[74924]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:27 localhost python3[74969]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764318146.5621905-114299-3191084048076/source _original_basename=tmpy_4008sd follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:22:28 localhost recover_tripleo_nova_virtqemud[75001]: 62642 Nov 28 03:22:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:22:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:22:28 localhost python3[74999]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:22:29 localhost ansible-async_wrapper.py[75173]: Invoked with 413905860756 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764318149.3304214-114685-167087183875994/AnsiballZ_command.py _ Nov 28 03:22:29 localhost ansible-async_wrapper.py[75176]: Starting module and watcher Nov 28 03:22:29 localhost ansible-async_wrapper.py[75176]: Start watching 75177 (3600) Nov 28 03:22:29 localhost ansible-async_wrapper.py[75177]: Start module (75177) Nov 28 03:22:29 localhost ansible-async_wrapper.py[75173]: Return async_wrapper task started. Nov 28 03:22:30 localhost python3[75197]: ansible-ansible.legacy.async_status Invoked with jid=413905860756.75173 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:22:33 localhost puppet-user[75196]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5 Nov 28 03:22:33 localhost puppet-user[75196]: (file: /etc/puppet/hiera.yaml) Nov 28 03:22:33 localhost puppet-user[75196]: Warning: Undefined variable '::deploy_config_name'; Nov 28 03:22:33 localhost puppet-user[75196]: (file & line not available) Nov 28 03:22:33 localhost puppet-user[75196]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html Nov 28 03:22:33 localhost puppet-user[75196]: (file & line not available) Nov 28 03:22:33 localhost puppet-user[75196]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8) Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. 
They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:22:33 localhost puppet-user[75196]: with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:22:33 localhost puppet-user[75196]: with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:22:33 localhost puppet-user[75196]: with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. 
at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:22:33 localhost puppet-user[75196]: with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: This method is deprecated, please use the stdlib validate_legacy function, Nov 28 03:22:33 localhost puppet-user[75196]: with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4] Nov 28 03:22:33 localhost puppet-user[75196]: (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation') Nov 28 03:22:33 localhost puppet-user[75196]: Warning: Unknown variable: '::deployment_type'. 
(file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69) Nov 28 03:22:33 localhost puppet-user[75196]: Notice: Compiled catalog for np0005538515.localdomain in environment production in 0.20 seconds Nov 28 03:22:34 localhost puppet-user[75196]: Notice: Applied catalog in 0.25 seconds Nov 28 03:22:34 localhost puppet-user[75196]: Application: Nov 28 03:22:34 localhost puppet-user[75196]: Initial environment: production Nov 28 03:22:34 localhost puppet-user[75196]: Converged environment: production Nov 28 03:22:34 localhost puppet-user[75196]: Run mode: user Nov 28 03:22:34 localhost puppet-user[75196]: Changes: Nov 28 03:22:34 localhost puppet-user[75196]: Events: Nov 28 03:22:34 localhost puppet-user[75196]: Resources: Nov 28 03:22:34 localhost puppet-user[75196]: Total: 19 Nov 28 03:22:34 localhost puppet-user[75196]: Time: Nov 28 03:22:34 localhost puppet-user[75196]: Package: 0.00 Nov 28 03:22:34 localhost puppet-user[75196]: Schedule: 0.00 Nov 28 03:22:34 localhost puppet-user[75196]: Exec: 0.01 Nov 28 03:22:34 localhost puppet-user[75196]: Augeas: 0.01 Nov 28 03:22:34 localhost puppet-user[75196]: File: 0.02 Nov 28 03:22:34 localhost puppet-user[75196]: Service: 0.07 Nov 28 03:22:34 localhost puppet-user[75196]: Transaction evaluation: 0.24 Nov 28 03:22:34 localhost puppet-user[75196]: Catalog application: 0.25 Nov 28 03:22:34 localhost puppet-user[75196]: Config retrieval: 0.26 Nov 28 03:22:34 localhost puppet-user[75196]: Last run: 1764318154 Nov 28 03:22:34 localhost puppet-user[75196]: Filebucket: 0.00 Nov 28 03:22:34 localhost puppet-user[75196]: Total: 0.25 Nov 28 03:22:34 localhost puppet-user[75196]: Version: Nov 28 03:22:34 localhost puppet-user[75196]: Config: 1764318153 Nov 28 03:22:34 localhost puppet-user[75196]: Puppet: 7.10.0 Nov 28 03:22:34 localhost ansible-async_wrapper.py[75177]: Module complete (75177) Nov 28 03:22:34 localhost ansible-async_wrapper.py[75176]: Done in kid B. 
Nov 28 03:22:40 localhost python3[75335]: ansible-ansible.legacy.async_status Invoked with jid=413905860756.75173 mode=status _async_dir=/tmp/.ansible_async Nov 28 03:22:41 localhost python3[75351]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 03:22:41 localhost python3[75367]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:22:42 localhost python3[75417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:42 localhost python3[75435]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpjcsc8r6m recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None Nov 28 03:22:42 localhost python3[75465]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Nov 28 03:22:44 localhost python3[75570]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None Nov 28 03:22:44 localhost python3[75589]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:45 localhost python3[75621]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1 Nov 28 03:22:46 localhost python3[75671]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:46 localhost python3[75689]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None 
seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:47 localhost python3[75751]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:47 localhost python3[75769]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:47 localhost python3[75831]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:48 localhost python3[75849]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:22:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:22:48 localhost sshd[75933]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:22:48 localhost podman[75913]: 2025-11-28 08:22:48.662684027 +0000 UTC m=+0.098804609 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044) Nov 28 03:22:48 localhost podman[75913]: 2025-11-28 08:22:48.684350402 +0000 UTC m=+0.120471034 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, version=17.1.12, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 28 03:22:48 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:22:48 localhost python3[75912]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:48 localhost podman[75911]: 2025-11-28 08:22:48.760137082 +0000 UTC m=+0.196463950 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, version=17.1.12, tcib_managed=true, config_id=tripleo_step1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044) Nov 28 03:22:48 localhost python3[75979]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:48 localhost podman[75911]: 2025-11-28 08:22:48.965471358 +0000 UTC m=+0.401798236 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true) Nov 28 03:22:48 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:22:49 localhost python3[76010]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:22:49 localhost systemd[1]: Reloading. Nov 28 03:22:49 localhost systemd-rc-local-generator[76033]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:22:49 localhost systemd-sysv-generator[76038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:22:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 03:22:50 localhost python3[76096]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:50 localhost python3[76114]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:51 localhost python3[76176]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True Nov 28 03:22:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:22:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:22:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:22:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:22:51 localhost systemd[1]: tmp-crun.RXpWM4.mount: Deactivated successfully. Nov 28 03:22:51 localhost systemd[1]: tmp-crun.Wck0jB.mount: Deactivated successfully. 
Nov 28 03:22:51 localhost podman[76197]: 2025-11-28 08:22:51.369242146 +0000 UTC m=+0.112633169 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, container_name=iscsid) Nov 28 03:22:51 localhost podman[76196]: 2025-11-28 08:22:51.324210704 +0000 UTC m=+0.067681750 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:22:51 localhost podman[76198]: 2025-11-28 08:22:51.347425296 +0000 UTC m=+0.081934213 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, release=1761123044, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
config_id=tripleo_step4) Nov 28 03:22:51 localhost podman[76196]: 2025-11-28 08:22:51.405517626 +0000 UTC m=+0.148988742 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:22:51 localhost python3[76194]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:22:51 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:22:51 localhost podman[76198]: 2025-11-28 08:22:51.427949115 +0000 UTC m=+0.162458022 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-type=git, container_name=ceilometer_agent_compute, config_id=tripleo_step4) Nov 28 03:22:51 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:22:51 localhost podman[76195]: 2025-11-28 08:22:51.479186041 +0000 UTC m=+0.223292216 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Nov 28 03:22:51 localhost podman[76195]: 2025-11-28 08:22:51.514350057 +0000 UTC m=+0.258456272 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 28 03:22:51 
localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:22:51 localhost podman[76197]: 2025-11-28 08:22:51.532105489 +0000 UTC m=+0.275496502 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:22:51 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:22:51 localhost python3[76312]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:22:51 localhost systemd[1]: Reloading. Nov 28 03:22:52 localhost systemd-sysv-generator[76340]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:22:52 localhost systemd-rc-local-generator[76336]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:22:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:22:52 localhost systemd[1]: Starting Create netns directory... Nov 28 03:22:52 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 03:22:52 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. 
Nov 28 03:22:52 localhost systemd[1]: Finished Create netns directory. Nov 28 03:22:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:22:52 localhost podman[76370]: 2025-11-28 08:22:52.773028125 +0000 UTC m=+0.102775892 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target, vcs-type=git, url=https://www.redhat.com) Nov 28 03:22:52 localhost python3[76369]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6 Nov 28 03:22:53 localhost podman[76370]: 2025-11-28 08:22:53.106485122 +0000 UTC m=+0.436232919 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:22:53 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:22:53 localhost systemd[1]: tmp-crun.vqtUNR.mount: Deactivated successfully. 
Nov 28 03:22:54 localhost python3[76451]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False Nov 28 03:22:55 localhost podman[76491]: 2025-11-28 08:22:55.19115603 +0000 UTC m=+0.068663319 container create ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible) Nov 28 03:22:55 localhost systemd[1]: Started libpod-conmon-ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.scope. Nov 28 03:22:55 localhost systemd[1]: Started libcrun container. 
Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:22:55 localhost podman[76491]: 2025-11-28 08:22:55.26173944 +0000 UTC m=+0.139246739 container init ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:22:55 localhost podman[76491]: 2025-11-28 08:22:55.163789218 +0000 UTC m=+0.041296517 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:22:55 localhost systemd[1]: tmp-crun.QAMzgk.mount: Deactivated successfully. Nov 28 03:22:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:22:55 localhost podman[76491]: 2025-11-28 08:22:55.29834613 +0000 UTC m=+0.175853449 container start ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:22:55 localhost systemd-logind[763]: Existing logind session ID 28 used by new audit session, ignoring. 
Nov 28 03:22:55 localhost python3[76451]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:22:55 localhost systemd[1]: Created slice User Slice of UID 0. Nov 28 03:22:55 localhost systemd[1]: Starting User Runtime Directory /run/user/0... 
Nov 28 03:22:55 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 28 03:22:55 localhost systemd[1]: Starting User Manager for UID 0... Nov 28 03:22:55 localhost podman[76513]: 2025-11-28 08:22:55.422199118 +0000 UTC m=+0.113692343 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:22:55 localhost podman[76513]: 2025-11-28 08:22:55.485428498 +0000 UTC m=+0.176921683 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red 
Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044) Nov 28 03:22:55 localhost podman[76513]: unhealthy Nov 28 03:22:55 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:22:55 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 03:22:55 localhost systemd[76533]: Queued start job for default target Main User Target. Nov 28 03:22:55 localhost systemd[76533]: Created slice User Application Slice. Nov 28 03:22:55 localhost systemd[76533]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 28 03:22:55 localhost systemd[76533]: Started Daily Cleanup of User's Temporary Directories. Nov 28 03:22:55 localhost systemd[76533]: Reached target Paths. Nov 28 03:22:55 localhost systemd[76533]: Reached target Timers. Nov 28 03:22:55 localhost systemd[76533]: Starting D-Bus User Message Bus Socket... Nov 28 03:22:55 localhost systemd[76533]: Starting Create User's Volatile Files and Directories... Nov 28 03:22:55 localhost systemd[76533]: Listening on D-Bus User Message Bus Socket. Nov 28 03:22:55 localhost systemd[76533]: Reached target Sockets. 
Nov 28 03:22:55 localhost systemd[76533]: Finished Create User's Volatile Files and Directories. Nov 28 03:22:55 localhost systemd[76533]: Reached target Basic System. Nov 28 03:22:55 localhost systemd[76533]: Reached target Main User Target. Nov 28 03:22:55 localhost systemd[76533]: Startup finished in 127ms. Nov 28 03:22:55 localhost systemd[1]: Started User Manager for UID 0. Nov 28 03:22:55 localhost systemd[1]: Started Session c10 of User root. Nov 28 03:22:55 localhost systemd[1]: session-c10.scope: Deactivated successfully. Nov 28 03:22:55 localhost podman[76616]: 2025-11-28 08:22:55.819839044 +0000 UTC m=+0.074138461 container create 2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team) Nov 28 03:22:55 localhost systemd[1]: Started libpod-conmon-2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b.scope. Nov 28 03:22:55 localhost systemd[1]: Started libcrun container. 
Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3584b97526356ef5b6642175730f52565e80737460bbd74a6e31729b79699070/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3584b97526356ef5b6642175730f52565e80737460bbd74a6e31729b79699070/merged/var/log/nova supports timestamps until 2038 (0x7fffffff) Nov 28 03:22:55 localhost podman[76616]: 2025-11-28 08:22:55.782007255 +0000 UTC m=+0.036306712 image pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:22:55 localhost podman[76616]: 2025-11-28 08:22:55.885614083 +0000 UTC m=+0.139913490 container init 2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:22:55 localhost podman[76616]: 2025-11-28 08:22:55.895012936 +0000 UTC m=+0.149312373 container start 2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:22:55 localhost podman[76616]: 2025-11-28 08:22:55.895319426 +0000 UTC m=+0.149618873 container attach 
2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, release=1761123044, version=17.1.12) Nov 28 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:22:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:22:56 localhost podman[76640]: 2025-11-28 08:22:56.229881877 +0000 UTC m=+0.088053414 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044) Nov 28 03:22:56 localhost podman[76641]: 2025-11-28 08:22:56.277705457 +0000 UTC m=+0.132088876 
container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 28 03:22:56 localhost podman[76640]: 2025-11-28 08:22:56.283347022 +0000 UTC m=+0.141518599 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com) Nov 28 03:22:56 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:22:56 localhost podman[76641]: 2025-11-28 08:22:56.302395716 +0000 UTC m=+0.156779115 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4) Nov 28 03:22:56 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:23:05 localhost systemd[1]: Stopping User Manager for UID 0... Nov 28 03:23:05 localhost systemd[76533]: Activating special unit Exit the Session... Nov 28 03:23:05 localhost systemd[76533]: Stopped target Main User Target. Nov 28 03:23:05 localhost systemd[76533]: Stopped target Basic System. Nov 28 03:23:05 localhost systemd[76533]: Stopped target Paths. Nov 28 03:23:05 localhost systemd[76533]: Stopped target Sockets. Nov 28 03:23:05 localhost systemd[76533]: Stopped target Timers. Nov 28 03:23:05 localhost systemd[76533]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 03:23:05 localhost systemd[76533]: Closed D-Bus User Message Bus Socket. Nov 28 03:23:05 localhost systemd[76533]: Stopped Create User's Volatile Files and Directories. Nov 28 03:23:05 localhost systemd[76533]: Removed slice User Application Slice. Nov 28 03:23:05 localhost systemd[76533]: Reached target Shutdown. Nov 28 03:23:05 localhost systemd[76533]: Finished Exit the Session. Nov 28 03:23:05 localhost systemd[76533]: Reached target Exit the Session. Nov 28 03:23:05 localhost systemd[1]: user@0.service: Deactivated successfully. Nov 28 03:23:05 localhost systemd[1]: Stopped User Manager for UID 0. Nov 28 03:23:05 localhost systemd[1]: Stopping User Runtime Directory /run/user/0... Nov 28 03:23:05 localhost systemd[1]: run-user-0.mount: Deactivated successfully. Nov 28 03:23:05 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully. Nov 28 03:23:05 localhost systemd[1]: Stopped User Runtime Directory /run/user/0. Nov 28 03:23:05 localhost systemd[1]: Removed slice User Slice of UID 0. 
Nov 28 03:23:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:23:18 localhost podman[76764]: 2025-11-28 08:23:18.972347044 +0000 UTC m=+0.078560359 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T22:51:28Z, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044) Nov 28 03:23:18 localhost podman[76764]: 2025-11-28 08:23:18.987370442 +0000 UTC m=+0.093583757 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:23:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:23:19 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:23:19 localhost podman[76785]: 2025-11-28 08:23:19.094890591 +0000 UTC m=+0.082442779 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:23:19 localhost podman[76785]: 2025-11-28 08:23:19.317516726 +0000 UTC m=+0.305068884 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:23:19 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:23:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:23:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:23:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:23:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:23:22 localhost podman[76813]: 2025-11-28 08:23:22.011696821 +0000 UTC m=+0.123064184 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-cron, release=1761123044, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:23:22 localhost podman[76813]: 2025-11-28 08:23:22.023931572 +0000 UTC m=+0.135298885 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 28 03:23:22 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:23:22 localhost podman[76815]: 2025-11-28 08:23:22.076136698 +0000 UTC m=+0.175709234 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc.) Nov 28 03:23:22 localhost podman[76815]: 2025-11-28 08:23:22.111293824 +0000 UTC m=+0.210866390 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, tcib_managed=true, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, release=1761123044) Nov 28 03:23:22 localhost podman[76821]: 2025-11-28 08:23:22.127241121 +0000 UTC m=+0.223980698 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute) Nov 28 03:23:22 localhost systemd[1]: 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:23:22 localhost podman[76814]: 2025-11-28 08:23:22.180148369 +0000 UTC m=+0.286154326 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:23:22 localhost podman[76821]: 2025-11-28 08:23:22.207960575 +0000 UTC m=+0.304700132 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4) Nov 28 03:23:22 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:23:22 localhost podman[76814]: 2025-11-28 08:23:22.262307618 +0000 UTC m=+0.368313605 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4) Nov 28 03:23:22 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:23:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:23:23 localhost podman[76905]: 2025-11-28 08:23:23.979331304 +0000 UTC m=+0.085004888 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) 
Nov 28 03:23:24 localhost podman[76905]: 2025-11-28 08:23:24.355982406 +0000 UTC m=+0.461655960 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_migration_target, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 28 03:23:24 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:23:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:23:25 localhost podman[76929]: 2025-11-28 08:23:25.971315905 +0000 UTC m=+0.079126275 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Nov 28 03:23:26 localhost podman[76929]: 2025-11-28 08:23:26.02539848 +0000 UTC m=+0.133208790 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044) Nov 28 03:23:26 localhost podman[76929]: unhealthy Nov 28 03:23:26 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:23:26 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. 
Nov 28 03:23:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:23:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:23:26 localhost systemd[1]: tmp-crun.RgtATW.mount: Deactivated successfully. Nov 28 03:23:26 localhost podman[76951]: 2025-11-28 08:23:26.982249267 +0000 UTC m=+0.094014600 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public) Nov 28 03:23:27 localhost podman[76951]: 2025-11-28 08:23:27.012452238 +0000 UTC m=+0.124217551 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 28 03:23:27 localhost systemd[1]: tmp-crun.05gSII.mount: Deactivated successfully. Nov 28 03:23:27 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:23:27 localhost podman[76952]: 2025-11-28 08:23:27.028957411 +0000 UTC m=+0.136724839 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.) 
Nov 28 03:23:27 localhost podman[76952]: 2025-11-28 08:23:27.069625528 +0000 UTC m=+0.177392986 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent) Nov 28 03:23:27 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:23:46 localhost systemd[1]: session-27.scope: Deactivated successfully. Nov 28 03:23:46 localhost systemd[1]: session-27.scope: Consumed 3.058s CPU time. Nov 28 03:23:46 localhost systemd-logind[763]: Session 27 logged out. Waiting for processes to exit. Nov 28 03:23:46 localhost systemd-logind[763]: Removed session 27. Nov 28 03:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:23:49 localhost systemd[1]: tmp-crun.lgX6Hs.mount: Deactivated successfully. 
Nov 28 03:23:49 localhost podman[77002]: 2025-11-28 08:23:49.989445495 +0000 UTC m=+0.093257151 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, version=17.1.12, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z) Nov 28 03:23:50 localhost podman[77001]: 2025-11-28 08:23:50.044156018 +0000 UTC m=+0.149709777 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 28 03:23:50 localhost podman[77002]: 2025-11-28 08:23:50.054189257 +0000 UTC m=+0.158000953 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd) Nov 28 03:23:50 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:23:50 localhost podman[77001]: 2025-11-28 08:23:50.246535964 +0000 UTC m=+0.352089773 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.openshift.expose-services=, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:23:50 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:23:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:23:52 localhost systemd[1]: tmp-crun.evrH4H.mount: Deactivated successfully. 
Nov 28 03:23:52 localhost podman[77049]: 2025-11-28 08:23:52.980257889 +0000 UTC m=+0.090368321 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044) Nov 28 03:23:53 localhost podman[77049]: 2025-11-28 08:23:53.008652452 +0000 UTC m=+0.118762894 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 28 03:23:53 localhost systemd[1]: tmp-crun.OX92NG.mount: Deactivated successfully. Nov 28 03:23:53 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:23:53 localhost podman[77048]: 2025-11-28 08:23:53.026794291 +0000 UTC m=+0.136417378 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, 
vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:23:53 localhost podman[77048]: 2025-11-28 08:23:53.038594704 +0000 UTC m=+0.148217751 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:23:53 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:23:53 localhost podman[77050]: 2025-11-28 08:23:53.133466612 +0000 UTC m=+0.237336553 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, 
com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Nov 28 03:23:53 localhost podman[77050]: 2025-11-28 08:23:53.146385679 +0000 UTC m=+0.250255600 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true) Nov 28 03:23:53 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:23:53 localhost podman[77051]: 2025-11-28 08:23:53.099430595 +0000 UTC m=+0.197792696 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:23:53 localhost podman[77051]: 2025-11-28 08:23:53.230180648 +0000 UTC m=+0.328542749 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:23:53 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:23:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:23:54 localhost podman[77140]: 2025-11-28 08:23:54.963733721 +0000 UTC m=+0.076195535 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:23:55 localhost podman[77140]: 2025-11-28 08:23:55.320339922 +0000 UTC m=+0.432801756 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:23:55 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:23:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:23:56 localhost podman[77164]: 2025-11-28 08:23:56.962300249 +0000 UTC m=+0.074544045 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Nov 28 03:23:57 localhost podman[77164]: 2025-11-28 08:23:57.045479967 +0000 UTC m=+0.157723743 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Nov 28 03:23:57 localhost podman[77164]: unhealthy Nov 28 03:23:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:23:57 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:23:57 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 03:23:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:23:57 localhost podman[77186]: 2025-11-28 08:23:57.162590281 +0000 UTC m=+0.091471376 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com) Nov 28 03:23:57 localhost podman[77186]: 2025-11-28 08:23:57.216121067 +0000 UTC m=+0.145002152 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:23:57 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:23:57 localhost podman[77203]: 2025-11-28 08:23:57.232694137 +0000 UTC m=+0.065496635 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com) Nov 28 03:23:57 localhost podman[77203]: 2025-11-28 08:23:57.296514141 +0000 UTC m=+0.129316599 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_metadata_agent, release=1761123044, batch=17.1_20251118.1, 
managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, tcib_managed=true, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:23:57 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:24:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:24:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:24:20 localhost systemd[1]: tmp-crun.Lcdxsb.mount: Deactivated successfully. 
Nov 28 03:24:20 localhost podman[77309]: 2025-11-28 08:24:20.975292501 +0000 UTC m=+0.083023305 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 28 03:24:20 localhost podman[77309]: 2025-11-28 08:24:20.988639951 +0000 UTC m=+0.096370815 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step3) Nov 28 03:24:21 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:24:21 localhost systemd[1]: tmp-crun.m9QT4F.mount: Deactivated successfully. 
Nov 28 03:24:21 localhost podman[77308]: 2025-11-28 08:24:21.087562875 +0000 UTC m=+0.195222577 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd) Nov 28 03:24:21 localhost podman[77308]: 2025-11-28 08:24:21.282540514 +0000 UTC m=+0.390200246 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, container_name=metrics_qdr, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 28 03:24:21 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:24:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:24:23 localhost podman[77358]: 2025-11-28 08:24:23.979105966 +0000 UTC m=+0.088874486 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1) Nov 28 03:24:24 localhost podman[77358]: 2025-11-28 08:24:24.014135213 +0000 UTC m=+0.123903703 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:24:24 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:24:24 localhost podman[77360]: 2025-11-28 08:24:24.033336244 +0000 UTC m=+0.136378027 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3) Nov 28 03:24:24 localhost podman[77360]: 2025-11-28 08:24:24.044537408 +0000 UTC m=+0.147579181 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1) Nov 28 03:24:24 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:24:24 localhost podman[77359]: 2025-11-28 08:24:24.13658276 +0000 UTC m=+0.243282436 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Nov 28 03:24:24 localhost podman[77359]: 2025-11-28 08:24:24.163834529 +0000 UTC m=+0.270534215 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:24:24 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:24:24 localhost podman[77361]: 2025-11-28 08:24:24.185890357 +0000 UTC m=+0.286932519 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 28 03:24:24 localhost podman[77361]: 2025-11-28 08:24:24.237144674 +0000 UTC m=+0.338186836 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:24:24 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:24:24 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:24:24 localhost recover_tripleo_nova_virtqemud[77450]: 62642 Nov 28 03:24:24 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:24:24 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:24:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:24:25 localhost podman[77451]: 2025-11-28 08:24:25.977252969 +0000 UTC m=+0.083603013 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:24:26 localhost podman[77451]: 2025-11-28 08:24:26.344987733 +0000 UTC m=+0.451337727 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Nov 28 03:24:26 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:24:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:24:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:24:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:24:27 localhost systemd[1]: tmp-crun.uMCz58.mount: Deactivated successfully. Nov 28 03:24:27 localhost podman[77474]: 2025-11-28 08:24:27.989124025 +0000 UTC m=+0.094689114 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044) Nov 28 03:24:28 localhost systemd[1]: tmp-crun.PtFeyc.mount: Deactivated successfully. Nov 28 03:24:28 localhost podman[77475]: 2025-11-28 08:24:28.033382937 +0000 UTC m=+0.135414136 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Nov 28 03:24:28 localhost podman[77474]: 2025-11-28 08:24:28.040413583 +0000 UTC m=+0.145978612 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, tcib_managed=true, version=17.1.12) Nov 28 
03:24:28 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:24:28 localhost podman[77475]: 2025-11-28 08:24:28.083493349 +0000 UTC m=+0.185524488 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z) Nov 28 03:24:28 localhost podman[77475]: unhealthy Nov 28 03:24:28 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:24:28 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. 
Nov 28 03:24:28 localhost podman[77476]: 2025-11-28 08:24:28.084901492 +0000 UTC m=+0.183916369 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git) Nov 28 03:24:28 localhost podman[77476]: 2025-11-28 08:24:28.169525175 +0000 UTC m=+0.268540002 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:24:28 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:24:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:24:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:24:51 localhost systemd[1]: tmp-crun.dSkVk8.mount: Deactivated successfully. Nov 28 03:24:51 localhost podman[77543]: 2025-11-28 08:24:51.982786645 +0000 UTC m=+0.090887706 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 
'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:24:51 localhost podman[77543]: 2025-11-28 08:24:51.995540388 +0000 UTC m=+0.103641399 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
architecture=x86_64, container_name=collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack 
Platform 17.1 collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 28 03:24:52 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:24:52 localhost podman[77542]: 2025-11-28 08:24:52.094550094 +0000 UTC m=+0.203586345 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64) Nov 28 03:24:52 localhost podman[77542]: 2025-11-28 08:24:52.31546103 +0000 UTC m=+0.424497251 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z) Nov 28 03:24:52 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:24:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:24:54 localhost podman[77597]: 2025-11-28 08:24:54.977345424 +0000 UTC m=+0.073291446 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:24:55 localhost podman[77590]: 2025-11-28 08:24:55.035345749 +0000 UTC m=+0.140391651 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, container_name=logrotate_crond, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20251118.1) Nov 28 03:24:55 localhost podman[77590]: 2025-11-28 08:24:55.0493755 +0000 UTC m=+0.154421352 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z) Nov 28 03:24:55 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:24:55 localhost podman[77597]: 2025-11-28 08:24:55.088262107 +0000 UTC m=+0.184208159 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 28 03:24:55 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:24:55 localhost podman[77591]: 2025-11-28 08:24:55.140032059 +0000 UTC m=+0.241383067 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, 
io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:24:55 localhost podman[77592]: 2025-11-28 08:24:55.196554798 +0000 UTC m=+0.292161300 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64) Nov 28 03:24:55 localhost podman[77591]: 2025-11-28 08:24:55.22133049 +0000 UTC m=+0.322681518 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi) Nov 28 03:24:55 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:24:55 localhost podman[77592]: 2025-11-28 08:24:55.234565868 +0000 UTC m=+0.330172360 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3) Nov 28 03:24:55 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:24:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:24:56 localhost podman[77680]: 2025-11-28 08:24:56.973579829 +0000 UTC m=+0.078458655 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:24:57 localhost podman[77680]: 2025-11-28 08:24:57.370572362 +0000 UTC m=+0.475451148 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:24:57 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:24:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:24:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:24:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:24:58 localhost podman[77703]: 2025-11-28 08:24:58.971715682 +0000 UTC m=+0.082408107 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 28 03:24:59 localhost podman[77705]: 2025-11-28 08:24:59.029108008 +0000 UTC m=+0.130242848 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) 
Nov 28 03:24:59 localhost podman[77704]: 2025-11-28 08:24:59.086204225 +0000 UTC m=+0.190047168 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=) Nov 28 03:24:59 localhost podman[77705]: 2025-11-28 08:24:59.096427389 +0000 UTC m=+0.197562259 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com) Nov 28 03:24:59 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:24:59 localhost podman[77704]: 2025-11-28 08:24:59.138550175 +0000 UTC m=+0.242393118 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, release=1761123044, container_name=nova_compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:24:59 localhost podman[77704]: unhealthy Nov 28 03:24:59 localhost podman[77703]: 2025-11-28 08:24:59.150129681 +0000 UTC m=+0.260822126 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 
03:24:59 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:24:59 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 03:24:59 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:25:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:25:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:25:22 localhost podman[77849]: 2025-11-28 08:25:22.97896612 +0000 UTC m=+0.083182410 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:25:22 localhost podman[77849]: 2025-11-28 08:25:22.98872446 +0000 UTC m=+0.092940820 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, 
io.buildah.version=1.41.4, container_name=collectd, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, config_id=tripleo_step3) Nov 28 03:25:23 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:25:23 localhost podman[77848]: 2025-11-28 08:25:23.082442893 +0000 UTC m=+0.188948144 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 28 03:25:23 localhost podman[77848]: 2025-11-28 08:25:23.275707168 +0000 UTC m=+0.382212389 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com) Nov 28 03:25:23 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:25:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. 
Nov 28 03:25:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:25:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:25:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:25:25 localhost systemd[1]: tmp-crun.ANJNZj.mount: Deactivated successfully. Nov 28 03:25:25 localhost podman[77897]: 2025-11-28 08:25:25.989999554 +0000 UTC m=+0.093052583 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond) Nov 28 03:25:25 localhost podman[77897]: 2025-11-28 08:25:25.998581158 +0000 UTC m=+0.101634197 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, version=17.1.12, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public) Nov 28 03:25:26 localhost systemd[1]: tmp-crun.VdHkHM.mount: Deactivated successfully. 
Nov 28 03:25:26 localhost podman[77904]: 2025-11-28 08:25:26.039931901 +0000 UTC m=+0.132480567 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:25:26 localhost podman[77898]: 2025-11-28 08:25:26.083482331 +0000 UTC m=+0.183081324 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:25:26 localhost podman[77904]: 2025-11-28 08:25:26.094648405 +0000 UTC m=+0.187197101 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, container_name=ceilometer_agent_compute) Nov 28 03:25:26 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:25:26 localhost podman[77899]: 2025-11-28 08:25:26.136461381 +0000 UTC m=+0.231861035 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, architecture=x86_64, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Nov 28 03:25:26 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:25:26 localhost podman[77898]: 2025-11-28 08:25:26.166787603 +0000 UTC m=+0.266386566 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:25:26 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:25:26 localhost podman[77899]: 2025-11-28 08:25:26.217974439 +0000 UTC m=+0.313374103 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com) Nov 28 03:25:26 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:25:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:25:27 localhost podman[77988]: 2025-11-28 08:25:27.981148173 +0000 UTC m=+0.084188630 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:25:28 localhost podman[77988]: 2025-11-28 08:25:28.321494985 +0000 UTC m=+0.424535452 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:25:28 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:25:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:25:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:25:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:25:29 localhost podman[78014]: 2025-11-28 08:25:29.982828576 +0000 UTC m=+0.077244908 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:25:30 localhost podman[78014]: 2025-11-28 08:25:30.038588162 +0000 UTC m=+0.133004494 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, version=17.1.12, release=1761123044, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Nov 28 03:25:30 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:25:30 localhost podman[78013]: 2025-11-28 08:25:30.039397816 +0000 UTC m=+0.133747536 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 28 03:25:30 localhost podman[78012]: 2025-11-28 08:25:30.089192409 +0000 UTC m=+0.186866631 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller) Nov 28 03:25:30 localhost podman[78013]: 2025-11-28 08:25:30.123483623 +0000 UTC m=+0.217833373 container exec_died 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container) Nov 28 03:25:30 localhost podman[78013]: unhealthy Nov 28 03:25:30 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:25:30 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. 
Nov 28 03:25:30 localhost podman[78012]: 2025-11-28 08:25:30.170363826 +0000 UTC m=+0.268038128 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, 
managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:25:30 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:25:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:25:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:25:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:25:53 localhost recover_tripleo_nova_virtqemud[78084]: 62642 Nov 28 03:25:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:25:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:25:53 localhost podman[78078]: 2025-11-28 08:25:53.993975611 +0000 UTC m=+0.099553275 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:25:54 localhost systemd[1]: tmp-crun.DWdj9E.mount: Deactivated successfully. 
Nov 28 03:25:54 localhost podman[78079]: 2025-11-28 08:25:54.055969128 +0000 UTC m=+0.158807887 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, version=17.1.12, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, tcib_managed=true) Nov 28 03:25:54 localhost podman[78079]: 2025-11-28 08:25:54.06710914 +0000 UTC m=+0.169947919 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, release=1761123044, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, container_name=collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:25:54 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:25:54 localhost podman[78078]: 2025-11-28 08:25:54.229727533 +0000 UTC m=+0.335305147 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, 
config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, version=17.1.12) Nov 28 03:25:54 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:25:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:25:56 localhost podman[78128]: 2025-11-28 08:25:56.98367687 +0000 UTC m=+0.096494489 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, 
architecture=x86_64, com.redhat.component=openstack-cron-container, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Nov 28 03:25:57 localhost podman[78128]: 2025-11-28 08:25:57.015648063 +0000 UTC m=+0.128465682 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
distribution-scope=public, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 28 03:25:57 localhost systemd[1]: tmp-crun.QQu721.mount: Deactivated successfully. Nov 28 03:25:57 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:25:57 localhost podman[78129]: 2025-11-28 08:25:57.087981289 +0000 UTC m=+0.198111585 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64) Nov 28 03:25:57 localhost podman[78131]: 2025-11-28 08:25:57.139313509 +0000 UTC m=+0.242530473 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git) Nov 28 03:25:57 localhost podman[78129]: 2025-11-28 08:25:57.140204586 +0000 UTC m=+0.250334772 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, distribution-scope=public) Nov 28 03:25:57 localhost podman[78130]: 2025-11-28 08:25:57.048492534 +0000 UTC m=+0.154708880 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:25:57 localhost podman[78130]: 2025-11-28 08:25:57.178720701 +0000 UTC m=+0.284937037 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:25:57 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:25:57 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:25:57 localhost podman[78131]: 2025-11-28 08:25:57.24241067 +0000 UTC m=+0.345627654 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z) Nov 28 03:25:57 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:25:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:25:58 localhost podman[78237]: 2025-11-28 08:25:58.581843089 +0000 UTC m=+0.065742314 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4) Nov 28 03:25:58 localhost podman[78237]: 2025-11-28 08:25:58.931421943 +0000 UTC m=+0.415321138 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:25:58 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:26:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:26:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:26:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:26:00 localhost podman[78324]: 2025-11-28 08:26:00.983317641 +0000 UTC m=+0.084390107 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 28 03:26:01 localhost podman[78324]: 2025-11-28 08:26:01.021784174 +0000 UTC m=+0.122856610 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:26:01 localhost podman[78325]: 2025-11-28 08:26:01.039759707 +0000 UTC m=+0.138857743 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public) Nov 28 03:26:01 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:26:01 localhost podman[78326]: 2025-11-28 08:26:01.084511224 +0000 UTC m=+0.180005229 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:26:01 localhost podman[78325]: 2025-11-28 08:26:01.113331791 +0000 UTC m=+0.212429837 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:26:01 localhost podman[78326]: 2025-11-28 08:26:01.114093384 +0000 UTC m=+0.209587379 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:26:01 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:26:01 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:26:08 localhost systemd[1]: libpod-2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b.scope: Deactivated successfully. 
Nov 28 03:26:08 localhost podman[78401]: 2025-11-28 08:26:08.51154434 +0000 UTC m=+0.054709484 container died 2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, batch=17.1_20251118.1) Nov 28 03:26:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b-userdata-shm.mount: Deactivated successfully. Nov 28 03:26:08 localhost systemd[1]: var-lib-containers-storage-overlay-3584b97526356ef5b6642175730f52565e80737460bbd74a6e31729b79699070-merged.mount: Deactivated successfully. 
Nov 28 03:26:08 localhost podman[78401]: 2025-11-28 08:26:08.542015488 +0000 UTC m=+0.085180621 container cleanup 2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:26:08 localhost systemd[1]: libpod-conmon-2659a032c980f8a55db7d088ff7cf0c88c211e46e06e052e716c33cd12909e1b.scope: Deactivated successfully. Nov 28 03:26:08 localhost python3[76451]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=bbb5ea37891e3118676a78b59837de90 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 Nov 28 03:26:09 localhost python3[78456]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:26:09 localhost python3[78472]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True 
get_attributes=True checksum_algorithm=sha1 Nov 28 03:26:10 localhost python3[78533]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764318369.4489896-119293-261036023283361/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:26:10 localhost python3[78549]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 03:26:10 localhost systemd[1]: Reloading. Nov 28 03:26:10 localhost systemd-rc-local-generator[78577]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:26:10 localhost systemd-sysv-generator[78580]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:26:10 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:26:11 localhost python3[78601]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 03:26:11 localhost systemd[1]: Reloading. Nov 28 03:26:11 localhost systemd-rc-local-generator[78625]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:26:11 localhost systemd-sysv-generator[78630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:26:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:26:11 localhost systemd[1]: Starting nova_compute container... Nov 28 03:26:11 localhost tripleo-start-podman-container[78641]: Creating additional drop-in dependency for "nova_compute" (ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0) Nov 28 03:26:11 localhost systemd[1]: Reloading. Nov 28 03:26:12 localhost systemd-sysv-generator[78706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 03:26:12 localhost systemd-rc-local-generator[78702]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 03:26:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 03:26:12 localhost systemd[1]: Started nova_compute container. 
Nov 28 03:26:12 localhost python3[78741]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:26:14 localhost python3[78862]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005538515 step=5 update_config_hash_only=False Nov 28 03:26:14 localhost python3[78878]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 03:26:15 localhost python3[78894]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True Nov 28 03:26:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:26:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:26:24 localhost podman[78973]: 2025-11-28 08:26:24.987763461 +0000 UTC m=+0.090718573 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-collectd-container, version=17.1.12, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 28 03:26:25 localhost podman[78973]: 2025-11-28 08:26:25.004560278 +0000 UTC m=+0.107515390 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, 
io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z) Nov 28 03:26:25 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:26:25 localhost systemd[1]: tmp-crun.XAPXnZ.mount: Deactivated successfully. 
Nov 28 03:26:25 localhost podman[78972]: 2025-11-28 08:26:25.093049639 +0000 UTC m=+0.195881027 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, distribution-scope=public, release=1761123044, url=https://www.redhat.com, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Nov 28 03:26:25 localhost podman[78972]: 2025-11-28 08:26:25.288463262 +0000 UTC m=+0.391294630 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true) Nov 28 03:26:25 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:26:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:26:27 localhost podman[79023]: 2025-11-28 08:26:27.992209354 +0000 UTC m=+0.092162497 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12) Nov 28 03:26:28 localhost systemd[1]: tmp-crun.0JOBU6.mount: Deactivated successfully. Nov 28 03:26:28 localhost podman[79023]: 2025-11-28 08:26:28.053111488 +0000 UTC m=+0.153064631 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, io.openshift.expose-services=) Nov 28 03:26:28 localhost podman[79020]: 2025-11-28 08:26:28.053221011 +0000 UTC m=+0.161812529 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, 
distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:26:28 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:26:28 localhost podman[79021]: 2025-11-28 08:26:28.136369388 +0000 UTC m=+0.241921253 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, 
build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:26:28 localhost podman[79021]: 2025-11-28 08:26:28.194570169 +0000 UTC m=+0.300122084 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:26:28 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:26:28 localhost podman[79020]: 2025-11-28 08:26:28.234694354 +0000 UTC m=+0.343285842 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, distribution-scope=public, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:26:28 localhost podman[79022]: 2025-11-28 08:26:28.200837102 +0000 UTC m=+0.304936202 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T23:44:13Z) Nov 28 03:26:28 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:26:28 localhost podman[79022]: 2025-11-28 08:26:28.280746971 +0000 UTC m=+0.384846121 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 28 03:26:28 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:26:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:26:29 localhost podman[79111]: 2025-11-28 08:26:29.089317107 +0000 UTC m=+0.081888431 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:26:29 localhost podman[79111]: 2025-11-28 08:26:29.459697262 +0000 UTC m=+0.452268586 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:26:29 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:26:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:26:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:26:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:26:31 localhost podman[79135]: 2025-11-28 08:26:31.987944704 +0000 UTC m=+0.092791976 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team) Nov 28 03:26:32 localhost podman[79135]: 2025-11-28 08:26:32.004503493 +0000 UTC m=+0.109350766 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:26:32 localhost podman[79137]: 2025-11-28 08:26:32.024123127 +0000 UTC m=+0.121296572 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git) Nov 28 03:26:32 localhost podman[79137]: 2025-11-28 08:26:32.053388358 +0000 UTC m=+0.150561793 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Nov 28 03:26:32 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:26:32 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:26:32 localhost systemd[1]: tmp-crun.jV55Tw.mount: Deactivated successfully. 
Nov 28 03:26:32 localhost podman[79136]: 2025-11-28 08:26:32.197204702 +0000 UTC m=+0.299016010 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_compute, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true) Nov 28 03:26:32 localhost podman[79136]: 2025-11-28 08:26:32.252477263 +0000 UTC m=+0.354288591 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 28 03:26:32 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:26:37 localhost sshd[79206]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:26:45 localhost sshd[79208]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:26:45 localhost systemd-logind[763]: New session 33 of user zuul. Nov 28 03:26:45 localhost systemd[1]: Started Session 33 of User zuul. Nov 28 03:26:46 localhost python3[79317]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 03:26:53 localhost python3[79581]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None Nov 28 03:26:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:26:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:26:55 localhost systemd[1]: tmp-crun.hpzPSh.mount: Deactivated successfully. Nov 28 03:26:55 localhost podman[79585]: 2025-11-28 08:26:55.917273083 +0000 UTC m=+0.091037701 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:26:55 localhost podman[79585]: 2025-11-28 08:26:55.925799776 +0000 UTC m=+0.099564394 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, container_name=collectd, version=17.1.12, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:26:55 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:26:56 localhost podman[79584]: 2025-11-28 08:26:56.021019245 +0000 UTC m=+0.194655309 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 28 03:26:56 localhost podman[79584]: 2025-11-28 08:26:56.206655617 +0000 UTC m=+0.380291671 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Nov 28 03:26:56 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:26:57 localhost python3[79723]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None Nov 28 03:26:57 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled Nov 28 03:26:57 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation. Nov 28 03:26:57 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 03:26:57 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 03:26:57 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 03:26:57 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 03:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. 
Nov 28 03:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:26:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:26:58 localhost systemd[1]: tmp-crun.UHTBW1.mount: Deactivated successfully. Nov 28 03:26:58 localhost podman[79789]: 2025-11-28 08:26:58.981325631 +0000 UTC m=+0.086545824 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4) Nov 28 03:26:58 localhost podman[79789]: 2025-11-28 08:26:58.993203216 +0000 UTC m=+0.098423429 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:26:59 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:26:59 localhost systemd[1]: tmp-crun.UI7GCZ.mount: Deactivated successfully. 
Nov 28 03:26:59 localhost podman[79790]: 2025-11-28 08:26:59.035194728 +0000 UTC m=+0.140384020 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team) Nov 28 03:26:59 localhost podman[79792]: 2025-11-28 08:26:59.097808015 +0000 UTC m=+0.198169068 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:26:59 localhost podman[79791]: 2025-11-28 08:26:59.136804864 +0000 UTC m=+0.238121557 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:26:59 localhost podman[79790]: 2025-11-28 08:26:59.151327021 +0000 UTC m=+0.256516373 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, release=1761123044) Nov 28 03:26:59 localhost podman[79791]: 2025-11-28 08:26:59.150448913 +0000 UTC m=+0.251765616 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:26:59 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:26:59 localhost podman[79792]: 2025-11-28 08:26:59.200846154 +0000 UTC m=+0.301207257 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:26:59 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:26:59 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:26:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:26:59 localhost podman[79883]: 2025-11-28 08:26:59.974469746 +0000 UTC m=+0.084695518 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target) Nov 28 03:27:00 localhost podman[79883]: 2025-11-28 08:27:00.355614582 +0000 UTC m=+0.465840314 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Nov 28 03:27:00 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:27:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:27:02 localhost podman[79906]: 2025-11-28 08:27:02.979719882 +0000 UTC m=+0.086554484 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 28 03:27:03 localhost podman[79906]: 2025-11-28 08:27:03.030478684 +0000 UTC m=+0.137313286 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4) Nov 28 03:27:03 localhost podman[79907]: 2025-11-28 08:27:03.045360153 +0000 UTC m=+0.147044676 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:27:03 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:27:03 localhost podman[79908]: 2025-11-28 08:27:03.089985316 +0000 UTC m=+0.189019737 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:27:03 localhost podman[79907]: 2025-11-28 08:27:03.098636892 +0000 UTC m=+0.200321405 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, container_name=nova_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Nov 28 03:27:03 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:27:03 localhost podman[79908]: 2025-11-28 08:27:03.125510858 +0000 UTC m=+0.224545269 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Nov 28 03:27:03 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:27:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:27:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:27:26 localhost podman[80054]: 2025-11-28 08:27:26.983942595 +0000 UTC m=+0.084019506 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, url=https://www.redhat.com, container_name=collectd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64) Nov 28 03:27:27 localhost podman[80054]: 2025-11-28 08:27:27.020979834 +0000 UTC m=+0.121056795 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, name=rhosp17/openstack-collectd, distribution-scope=public, vendor=Red Hat, 
Inc., com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:27:27 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:27:27 localhost podman[80053]: 2025-11-28 08:27:27.044415276 +0000 UTC m=+0.146601712 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 28 03:27:27 localhost podman[80053]: 2025-11-28 08:27:27.221965638 +0000 UTC m=+0.324152144 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:27:27 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:27:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:27:29 localhost podman[80104]: 2025-11-28 08:27:29.987955794 +0000 UTC m=+0.092390713 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.buildah.version=1.41.4, distribution-scope=public, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc.) Nov 28 03:27:30 localhost podman[80104]: 2025-11-28 08:27:30.022465857 +0000 UTC m=+0.126900756 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 28 03:27:30 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:27:30 localhost systemd[1]: tmp-crun.9SEdAm.mount: Deactivated successfully. 
Nov 28 03:27:30 localhost podman[80103]: 2025-11-28 08:27:30.048716554 +0000 UTC m=+0.154333669 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, release=1761123044) Nov 28 03:27:30 localhost podman[80103]: 2025-11-28 08:27:30.076602872 +0000 UTC m=+0.182219987 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12) Nov 28 03:27:30 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:27:30 localhost podman[80105]: 2025-11-28 08:27:30.099643521 +0000 UTC m=+0.197691943 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible) Nov 28 03:27:30 localhost podman[80105]: 2025-11-28 08:27:30.132651526 +0000 UTC m=+0.230699938 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, distribution-scope=public, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:27:30 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:27:30 localhost podman[80102]: 2025-11-28 08:27:30.200218855 +0000 UTC m=+0.308420020 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container) Nov 28 03:27:30 localhost podman[80102]: 2025-11-28 08:27:30.21434504 +0000 UTC m=+0.322546185 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, release=1761123044, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Nov 28 03:27:30 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:27:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:27:30 localhost podman[80190]: 2025-11-28 08:27:30.984444502 +0000 UTC m=+0.083831740 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, managed_by=tripleo_ansible) Nov 28 03:27:31 localhost podman[80190]: 2025-11-28 08:27:31.355837019 +0000 UTC m=+0.455224317 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044) Nov 28 03:27:31 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:27:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:27:33 localhost podman[80215]: 2025-11-28 08:27:33.987018028 +0000 UTC m=+0.084989036 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z) Nov 28 03:27:34 localhost podman[80214]: 2025-11-28 08:27:34.033550129 +0000 UTC m=+0.134769627 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, version=17.1.12, architecture=x86_64) Nov 28 03:27:34 localhost podman[80215]: 2025-11-28 08:27:34.088527231 +0000 UTC m=+0.186498219 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, vcs-type=git, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64) Nov 28 03:27:34 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:27:34 localhost podman[80213]: 2025-11-28 08:27:34.095134164 +0000 UTC m=+0.199185919 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, 
name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Nov 28 03:27:34 localhost podman[80213]: 2025-11-28 08:27:34.178438967 +0000 UTC m=+0.282490692 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:27:34 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:27:34 localhost podman[80214]: 2025-11-28 08:27:34.19671422 +0000 UTC m=+0.297933718 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 28 03:27:34 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:27:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:27:53 localhost recover_tripleo_nova_virtqemud[80291]: 62642 Nov 28 03:27:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:27:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:27:57 localhost systemd[1]: session-33.scope: Deactivated successfully. Nov 28 03:27:57 localhost systemd[1]: session-33.scope: Consumed 5.649s CPU time. Nov 28 03:27:57 localhost systemd-logind[763]: Session 33 logged out. Waiting for processes to exit. Nov 28 03:27:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:27:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:27:57 localhost systemd-logind[763]: Removed session 33. Nov 28 03:27:57 localhost systemd[1]: tmp-crun.lgO1BD.mount: Deactivated successfully. 
Nov 28 03:27:57 localhost podman[80293]: 2025-11-28 08:27:57.498654376 +0000 UTC m=+0.097388187 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:27:57 localhost podman[80293]: 2025-11-28 08:27:57.508936992 +0000 UTC m=+0.107670763 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:27:57 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:27:57 localhost podman[80292]: 2025-11-28 08:27:57.589295305 +0000 UTC m=+0.190037788 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat 
OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:27:57 localhost podman[80292]: 2025-11-28 08:27:57.780530358 +0000 UTC m=+0.381272851 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Nov 28 03:27:57 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:28:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:28:00 localhost systemd[1]: tmp-crun.dvIp9X.mount: Deactivated successfully. Nov 28 03:28:00 localhost podman[80385]: 2025-11-28 08:28:00.990321688 +0000 UTC m=+0.092575820 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, 
name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12) Nov 28 03:28:01 localhost podman[80385]: 2025-11-28 08:28:01.003643616 +0000 UTC m=+0.105897758 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:28:01 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:28:01 localhost podman[80386]: 2025-11-28 08:28:01.045888727 +0000 UTC m=+0.143845757 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:28:01 localhost podman[80383]: 2025-11-28 08:28:01.085145006 +0000 UTC m=+0.191083010 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1) Nov 28 03:28:01 localhost podman[80383]: 2025-11-28 08:28:01.090813609 +0000 UTC m=+0.196751573 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:28:01 localhost podman[80386]: 2025-11-28 08:28:01.099444933 +0000 UTC m=+0.197401903 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4) Nov 28 03:28:01 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:28:01 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:28:01 localhost podman[80384]: 2025-11-28 08:28:01.139621171 +0000 UTC m=+0.245711300 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:28:01 localhost podman[80384]: 2025-11-28 08:28:01.162059206 +0000 UTC m=+0.268149325 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:12:45Z) Nov 28 03:28:01 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:28:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:28:01 localhost podman[80473]: 2025-11-28 08:28:01.971988309 +0000 UTC m=+0.082974956 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:28:02 localhost podman[80473]: 2025-11-28 08:28:02.347391652 +0000 UTC m=+0.458378289 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red 
Hat, Inc., managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64) Nov 28 03:28:02 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 03:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:28:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:28:04 localhost systemd[1]: tmp-crun.Nptik9.mount: Deactivated successfully. Nov 28 03:28:04 localhost podman[80496]: 2025-11-28 08:28:04.982380752 +0000 UTC m=+0.089824706 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, 
container_name=ovn_controller, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4) Nov 28 03:28:05 localhost podman[80497]: 2025-11-28 08:28:05.032281138 +0000 UTC m=+0.137935227 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_id=tripleo_step5) Nov 28 03:28:05 localhost podman[80497]: 2025-11-28 08:28:05.087515256 +0000 UTC m=+0.193169385 container exec_died 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, container_name=nova_compute, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible) Nov 28 03:28:05 localhost podman[80498]: 2025-11-28 08:28:05.094348244 +0000 UTC m=+0.196158676 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent) Nov 28 03:28:05 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:28:05 localhost podman[80496]: 2025-11-28 08:28:05.108869318 +0000 UTC m=+0.216313312 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:28:05 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:28:05 localhost podman[80498]: 2025-11-28 08:28:05.16259383 +0000 UTC m=+0.264404242 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:28:05 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:28:11 localhost sshd[80570]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:28:11 localhost systemd-logind[763]: New session 34 of user zuul. Nov 28 03:28:11 localhost systemd[1]: Started Session 34 of User zuul. Nov 28 03:28:12 localhost python3[80589]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 03:28:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:28:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:28:27 localhost podman[80669]: 2025-11-28 08:28:27.983771453 +0000 UTC m=+0.087381571 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:28:28 localhost systemd[1]: tmp-crun.RmCDmp.mount: Deactivated successfully. Nov 28 03:28:28 localhost podman[80670]: 2025-11-28 08:28:28.042192469 +0000 UTC m=+0.145665953 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd) Nov 28 03:28:28 localhost podman[80670]: 2025-11-28 08:28:28.056571758 +0000 UTC m=+0.160045232 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4) Nov 28 03:28:28 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:28:28 localhost podman[80669]: 2025-11-28 08:28:28.170722217 +0000 UTC m=+0.274332365 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:28:28 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:28:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:28:31 localhost systemd[1]: tmp-crun.lmgIXF.mount: Deactivated successfully. 
Nov 28 03:28:31 localhost podman[80720]: 2025-11-28 08:28:31.9804972 +0000 UTC m=+0.086684749 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true) Nov 28 03:28:32 localhost podman[80719]: 2025-11-28 08:28:31.96021896 +0000 UTC m=+0.073448665 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 28 03:28:32 localhost podman[80721]: 2025-11-28 08:28:32.018003047 +0000 UTC m=+0.125164107 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=) Nov 28 03:28:32 localhost podman[80721]: 2025-11-28 08:28:32.030352384 +0000 UTC m=+0.137513414 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true) Nov 28 03:28:32 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:28:32 localhost podman[80727]: 2025-11-28 08:28:32.072993307 +0000 UTC m=+0.168629605 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git) Nov 28 03:28:32 localhost podman[80720]: 2025-11-28 08:28:32.085749527 +0000 UTC m=+0.191937096 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red 
Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044) Nov 28 03:28:32 localhost podman[80719]: 2025-11-28 08:28:32.096828216 +0000 UTC m=+0.210057991 container exec_died 
719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z) Nov 28 03:28:32 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:28:32 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:28:32 localhost podman[80727]: 2025-11-28 08:28:32.129502315 +0000 UTC m=+0.225138593 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:28:32 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:28:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:28:32 localhost podman[80812]: 2025-11-28 08:28:32.970531398 +0000 UTC m=+0.078933183 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:28:33 localhost podman[80812]: 2025-11-28 08:28:33.373532765 +0000 UTC m=+0.481934570 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_migration_target) Nov 28 03:28:33 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. 
Nov 28 03:28:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:28:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 4386 writes, 20K keys, 4386 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4386 writes, 493 syncs, 8.90 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:28:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:28:35 localhost systemd[1]: tmp-crun.iWD794.mount: Deactivated successfully. 
Nov 28 03:28:36 localhost podman[80837]: 2025-11-28 08:28:36.004668467 +0000 UTC m=+0.103365371 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:28:36 localhost podman[80835]: 2025-11-28 08:28:35.970651777 +0000 UTC m=+0.077873531 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64) Nov 28 03:28:36 localhost podman[80837]: 2025-11-28 08:28:36.051444696 +0000 UTC m=+0.150141600 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public) Nov 28 03:28:36 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:28:36 localhost podman[80835]: 2025-11-28 08:28:36.10162922 +0000 UTC m=+0.208850914 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044) Nov 28 03:28:36 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:28:36 localhost podman[80836]: 2025-11-28 08:28:36.194312073 +0000 UTC m=+0.296136242 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public) Nov 28 03:28:36 localhost podman[80836]: 2025-11-28 08:28:36.220053579 +0000 UTC m=+0.321877728 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:28:36 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:28:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:28:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 5246 writes, 23K keys, 5246 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5246 writes, 540 syncs, 9.71 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:28:40 localhost python3[80921]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None Nov 28 03:28:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 03:28:43 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 03:28:43 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 03:28:44 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 03:28:44 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 03:28:44 localhost systemd[1]: run-r445c5d9cc9674500ae668767b7f7d736.service: Deactivated successfully. 
Nov 28 03:28:44 localhost systemd[1]: run-r5464bbe0a6d4419991b2b9de385bfc38.service: Deactivated successfully. Nov 28 03:28:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:28:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:28:58 localhost systemd[1]: tmp-crun.5Z7Ib8.mount: Deactivated successfully. Nov 28 03:28:58 localhost podman[81075]: 2025-11-28 08:28:58.983201298 +0000 UTC m=+0.088637089 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team) Nov 28 03:28:59 localhost podman[81074]: 2025-11-28 08:28:59.031274598 +0000 UTC m=+0.137531624 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, 
konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:28:59 localhost podman[81075]: 2025-11-28 
08:28:59.042917744 +0000 UTC m=+0.148353545 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public) Nov 28 03:28:59 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:28:59 localhost podman[81074]: 2025-11-28 08:28:59.224411911 +0000 UTC m=+0.330668957 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z) Nov 28 03:28:59 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:29:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:29:02 localhost podman[81167]: 2025-11-28 08:29:02.987956111 +0000 UTC m=+0.090139346 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-cron, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step4, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public) Nov 28 03:29:03 localhost podman[81167]: 2025-11-28 08:29:03.005485377 +0000 UTC m=+0.107668672 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container) Nov 28 03:29:03 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:29:03 localhost podman[81175]: 2025-11-28 08:29:03.047918113 +0000 UTC m=+0.136808431 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=) Nov 28 03:29:03 localhost podman[81169]: 2025-11-28 08:29:03.104906286 +0000 UTC m=+0.197702094 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Nov 28 03:29:03 localhost podman[81169]: 2025-11-28 08:29:03.142525005 +0000 UTC m=+0.235320773 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:29:03 localhost systemd[1]: 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:29:03 localhost podman[81175]: 2025-11-28 08:29:03.157801472 +0000 UTC m=+0.246691800 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z) Nov 28 03:29:03 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:29:03 localhost podman[81168]: 2025-11-28 08:29:03.157568465 +0000 UTC m=+0.257013636 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:29:03 localhost podman[81168]: 2025-11-28 08:29:03.240477098 +0000 UTC m=+0.339922219 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, 
batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4) Nov 28 03:29:03 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:29:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:29:03 localhost podman[81259]: 2025-11-28 08:29:03.974931604 +0000 UTC m=+0.082071139 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_migration_target, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:29:03 localhost systemd[1]: tmp-crun.IQO7N0.mount: Deactivated successfully. 
Nov 28 03:29:04 localhost podman[81259]: 2025-11-28 08:29:04.356543448 +0000 UTC m=+0.463682983 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_migration_target, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:29:04 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:29:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:29:07 localhost systemd[1]: tmp-crun.O6OV95.mount: Deactivated successfully. 
Nov 28 03:29:07 localhost podman[81284]: 2025-11-28 08:29:07.028512526 +0000 UTC m=+0.121638358 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack 
TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:29:07 localhost podman[81282]: 2025-11-28 08:29:06.993770805 +0000 UTC m=+0.097591814 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:29:07 localhost podman[81282]: 2025-11-28 08:29:07.078471093 +0000 UTC m=+0.182292072 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 03:29:07 localhost podman[81283]: 2025-11-28 08:29:07.078594507 +0000 UTC m=+0.175801383 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
build-date=2025-11-19T00:36:58Z, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64) Nov 28 03:29:07 localhost podman[81284]: 2025-11-28 08:29:07.092523733 +0000 UTC m=+0.185649565 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, vcs-type=git, architecture=x86_64, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 28 03:29:07 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:29:07 localhost podman[81283]: 2025-11-28 08:29:07.135516257 +0000 UTC m=+0.232723153 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:29:07 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:29:07 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:29:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:29:23 localhost recover_tripleo_nova_virtqemud[81356]: 62642 Nov 28 03:29:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:29:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:29:24 localhost python3[81372]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 03:29:27 localhost sshd[81505]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:29:28 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 28 03:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:29:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:29:29 localhost podman[81632]: 2025-11-28 08:29:29.984699628 +0000 UTC m=+0.087816765 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd) Nov 28 03:29:30 localhost podman[81632]: 2025-11-28 08:29:30.02241433 +0000 UTC m=+0.125531437 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:29:30 localhost systemd[1]: tmp-crun.d1G0bb.mount: Deactivated successfully. 
Nov 28 03:29:30 localhost podman[81631]: 2025-11-28 08:29:30.041680889 +0000 UTC m=+0.145081115 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, 
io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:29:30 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:29:30 localhost podman[81631]: 2025-11-28 08:29:30.244375274 +0000 UTC m=+0.347775480 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044) Nov 28 03:29:30 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:29:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:29:33 localhost podman[81740]: 2025-11-28 08:29:33.980233108 +0000 UTC m=+0.088735473 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public) Nov 28 03:29:34 localhost podman[81741]: 2025-11-28 08:29:34.035145316 +0000 UTC m=+0.140855215 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12) Nov 28 03:29:34 localhost podman[81741]: 2025-11-28 08:29:34.089537148 +0000 UTC m=+0.195247007 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:29:34 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: 
Deactivated successfully. Nov 28 03:29:34 localhost podman[81742]: 2025-11-28 08:29:34.139172725 +0000 UTC m=+0.242586435 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, vcs-type=git, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:29:34 localhost podman[81743]: 2025-11-28 08:29:34.095280563 +0000 UTC m=+0.197457995 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:29:34 localhost podman[81740]: 2025-11-28 08:29:34.165785898 +0000 UTC m=+0.274288263 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) 
Nov 28 03:29:34 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:29:34 localhost podman[81743]: 2025-11-28 08:29:34.177427605 +0000 UTC m=+0.279605067 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:29:34 localhost podman[81742]: 2025-11-28 08:29:34.198697934 +0000 UTC m=+0.302111654 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.expose-services=, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:29:34 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:29:34 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:29:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:29:34 localhost podman[81832]: 2025-11-28 08:29:34.980950331 +0000 UTC m=+0.087948658 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:29:35 localhost podman[81832]: 2025-11-28 08:29:35.365548856 +0000 UTC m=+0.472547203 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=) Nov 28 03:29:35 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:29:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:29:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:29:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:29:37 localhost podman[81855]: 2025-11-28 08:29:37.975645594 +0000 UTC m=+0.083159522 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:29:38 localhost systemd[1]: tmp-crun.1m3IVo.mount: Deactivated successfully. Nov 28 03:29:38 localhost systemd[1]: tmp-crun.gYxxDb.mount: Deactivated successfully. Nov 28 03:29:38 localhost podman[81856]: 2025-11-28 08:29:38.04816022 +0000 UTC m=+0.152393488 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 28 03:29:38 localhost podman[81855]: 2025-11-28 08:29:38.061416096 +0000 UTC m=+0.168929974 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=) Nov 28 03:29:38 localhost podman[81857]: 2025-11-28 08:29:38.019424902 +0000 UTC 
m=+0.115249973 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1) Nov 28 03:29:38 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:29:38 localhost podman[81856]: 2025-11-28 08:29:38.079318492 +0000 UTC m=+0.183551770 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:29:38 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:29:38 localhost podman[81857]: 2025-11-28 08:29:38.103396239 +0000 UTC m=+0.199221310 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git) Nov 28 03:29:38 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:30:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:30:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:30:00 localhost podman[81929]: 2025-11-28 08:30:00.978549272 +0000 UTC m=+0.087266547 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, release=1761123044, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, url=https://www.redhat.com) Nov 28 03:30:01 localhost systemd[1]: tmp-crun.Q4dXx5.mount: Deactivated successfully. Nov 28 03:30:01 localhost podman[81930]: 2025-11-28 08:30:01.044134686 +0000 UTC m=+0.147242390 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, container_name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:30:01 localhost podman[81930]: 2025-11-28 08:30:01.056391631 +0000 UTC m=+0.159499365 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044) Nov 28 03:30:01 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:30:01 localhost podman[81929]: 2025-11-28 08:30:01.148684471 +0000 UTC m=+0.257401716 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:30:01 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:30:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:30:04 localhost systemd[1]: tmp-crun.aLqEjQ.mount: Deactivated successfully. 
Nov 28 03:30:04 localhost podman[82020]: 2025-11-28 08:30:04.97954114 +0000 UTC m=+0.078122939 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Nov 28 03:30:04 localhost podman[82021]: 2025-11-28 08:30:04.994913229 +0000 UTC m=+0.093545379 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:30:05 localhost podman[82020]: 2025-11-28 08:30:05.037472871 +0000 UTC m=+0.136054650 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Nov 28 03:30:05 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:30:05 localhost podman[82021]: 2025-11-28 08:30:05.05578831 +0000 UTC m=+0.154420440 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, 
architecture=x86_64, container_name=iscsid, config_id=tripleo_step3, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:30:05 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:30:05 localhost podman[82019]: 2025-11-28 08:30:05.039687368 +0000 UTC m=+0.145402225 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:30:05 localhost podman[82025]: 2025-11-28 08:30:05.104360374 +0000 UTC m=+0.197483076 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-compute-container) Nov 28 03:30:05 localhost podman[82019]: 2025-11-28 08:30:05.119624251 +0000 UTC m=+0.225339088 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO 
Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 03:30:05 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:30:05 localhost podman[82025]: 2025-11-28 08:30:05.163619236 +0000 UTC m=+0.256741918 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:30:05 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:30:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:30:05 localhost systemd[1]: tmp-crun.OgtHXq.mount: Deactivated successfully. 
Nov 28 03:30:05 localhost podman[82105]: 2025-11-28 08:30:05.974847538 +0000 UTC m=+0.081603285 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4) Nov 28 03:30:06 localhost podman[82105]: 2025-11-28 08:30:06.390543203 +0000 UTC m=+0.497298910 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, container_name=nova_migration_target, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container) Nov 28 03:30:06 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:30:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:30:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:30:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:30:08 localhost systemd[1]: tmp-crun.qJ97TB.mount: Deactivated successfully. 
Nov 28 03:30:08 localhost podman[82128]: 2025-11-28 08:30:08.983193968 +0000 UTC m=+0.093408615 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com) Nov 28 03:30:09 localhost systemd[1]: tmp-crun.tf2hrs.mount: Deactivated successfully. Nov 28 03:30:09 localhost podman[82128]: 2025-11-28 08:30:09.027654397 +0000 UTC m=+0.137869004 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:34:05Z, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:30:09 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:30:09 localhost podman[82129]: 2025-11-28 08:30:09.073353654 +0000 UTC m=+0.180637481 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:30:09 localhost podman[82130]: 2025-11-28 08:30:09.031604848 +0000 UTC 
m=+0.135363788 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:30:09 localhost podman[82129]: 2025-11-28 08:30:09.095214012 +0000 UTC m=+0.202497909 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, distribution-scope=public, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, 
url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z) Nov 28 03:30:09 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:30:09 localhost podman[82130]: 2025-11-28 08:30:09.114650646 +0000 UTC m=+0.218409596 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:30:09 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:30:26 localhost sshd[82200]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:30:30 localhost python3[82294]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:30:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:30:31 localhost podman[82297]: 2025-11-28 08:30:31.996636302 +0000 UTC m=+0.096899713 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, version=17.1.12, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 28 03:30:32 localhost systemd[1]: tmp-crun.uIiXHv.mount: Deactivated successfully. 
Nov 28 03:30:32 localhost podman[82299]: 2025-11-28 08:30:32.040195234 +0000 UTC m=+0.136563745 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, name=rhosp17/openstack-collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, 
io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:30:32 localhost podman[82299]: 2025-11-28 08:30:32.077551035 +0000 UTC m=+0.173919546 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:30:32 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:30:32 localhost podman[82297]: 2025-11-28 08:30:32.170110494 +0000 UTC m=+0.270373895 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 28 03:30:32 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:30:33 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 28 03:30:34 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 28 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:30:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:30:35 localhost systemd[1]: tmp-crun.3m5ode.mount: Deactivated successfully. 
Nov 28 03:30:35 localhost podman[82471]: 2025-11-28 08:30:35.992711478 +0000 UTC m=+0.102406720 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
name=rhosp17/openstack-cron, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 28 03:30:36 localhost systemd[1]: tmp-crun.YPjbCG.mount: Deactivated successfully. Nov 28 03:30:36 localhost podman[82471]: 2025-11-28 08:30:36.032218456 +0000 UTC m=+0.141913718 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:30:36 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:30:36 localhost podman[82472]: 2025-11-28 08:30:36.038191809 +0000 UTC m=+0.145299112 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:30:36 localhost podman[82473]: 2025-11-28 08:30:36.097945084 +0000 UTC m=+0.201958103 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3) Nov 28 03:30:36 localhost podman[82472]: 2025-11-28 08:30:36.116952766 +0000 UTC m=+0.224060049 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:30:36 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:30:36 localhost podman[82473]: 2025-11-28 08:30:36.135573594 +0000 UTC m=+0.239586613 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=) Nov 28 03:30:36 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:30:36 localhost podman[82479]: 2025-11-28 08:30:36.21037261 +0000 UTC m=+0.308613862 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, tcib_managed=true) Nov 28 03:30:36 localhost podman[82479]: 2025-11-28 08:30:36.237460339 +0000 UTC m=+0.335701641 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:30:36 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:30:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:30:36 localhost podman[82561]: 2025-11-28 08:30:36.950340155 +0000 UTC m=+0.065409940 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, release=1761123044) Nov 28 03:30:37 localhost podman[82561]: 2025-11-28 08:30:37.327516992 +0000 UTC m=+0.442586817 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044) Nov 28 03:30:37 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. 
Nov 28 03:30:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:30:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:30:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:30:40 localhost podman[82644]: 2025-11-28 08:30:40.014163131 +0000 UTC m=+0.118775761 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:30:40 localhost podman[82642]: 2025-11-28 08:30:39.976796799 +0000 UTC m=+0.084228525 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:30:40 localhost podman[82643]: 2025-11-28 08:30:40.034643096 +0000 UTC m=+0.141133674 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, 
health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:30:40 localhost podman[82644]: 2025-11-28 08:30:40.052446161 +0000 UTC m=+0.157058771 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, architecture=x86_64, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:30:40 localhost podman[82642]: 2025-11-28 08:30:40.063452327 +0000 UTC m=+0.170884033 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:30:40 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:30:40 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:30:40 localhost podman[82643]: 2025-11-28 08:30:40.109507944 +0000 UTC m=+0.215998522 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Nov 28 03:30:40 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:30:52 localhost python3[82729]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname Nov 28 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:31:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:31:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:31:02 localhost recover_tripleo_nova_virtqemud[82753]: 62642 Nov 28 03:31:02 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:31:02 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:31:02 localhost podman[82742]: 2025-11-28 08:31:02.993882889 +0000 UTC m=+0.089510267 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Nov 28 03:31:03 localhost podman[82742]: 2025-11-28 08:31:03.006337709 +0000 UTC m=+0.101965067 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, tcib_managed=true, 
description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:31:03 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:31:03 localhost podman[82741]: 2025-11-28 08:31:03.094550035 +0000 UTC m=+0.195445914 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:31:03 localhost podman[82741]: 2025-11-28 08:31:03.299362324 +0000 UTC m=+0.400258213 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Nov 28 03:31:03 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:31:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:31:06 localhost podman[82824]: 2025-11-28 08:31:06.980990102 +0000 UTC m=+0.088464824 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, vcs-type=git) Nov 28 03:31:06 localhost podman[82824]: 2025-11-28 08:31:06.990325948 +0000 UTC m=+0.097800640 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.) Nov 28 03:31:07 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:31:07 localhost systemd[1]: tmp-crun.zUGQXa.mount: Deactivated successfully. 
Nov 28 03:31:07 localhost podman[82826]: 2025-11-28 08:31:07.047608728 +0000 UTC m=+0.146745085 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, container_name=iscsid, version=17.1.12, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container) Nov 28 03:31:07 localhost podman[82826]: 2025-11-28 08:31:07.062001238 +0000 UTC m=+0.161137625 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:31:07 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:31:07 localhost systemd[1]: tmp-crun.bokU2O.mount: Deactivated successfully. 
Nov 28 03:31:07 localhost podman[82832]: 2025-11-28 08:31:07.150225184 +0000 UTC m=+0.247095173 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_compute, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=) Nov 28 03:31:07 localhost podman[82832]: 2025-11-28 08:31:07.182445819 +0000 UTC m=+0.279315818 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container) Nov 28 03:31:07 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:31:07 localhost podman[82825]: 2025-11-28 08:31:07.186986138 +0000 UTC m=+0.290660094 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:12:45Z, tcib_managed=true, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:31:07 localhost podman[82825]: 2025-11-28 08:31:07.266640442 +0000 UTC m=+0.370314388 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:31:07 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:31:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:31:07 localhost podman[82917]: 2025-11-28 08:31:07.966288504 +0000 UTC m=+0.070885767 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:31:08 localhost podman[82917]: 2025-11-28 08:31:08.351451525 +0000 UTC m=+0.456048788 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Nov 28 03:31:08 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:31:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:31:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:31:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:31:10 localhost podman[82944]: 2025-11-28 08:31:10.982147535 +0000 UTC m=+0.086632448 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_metadata_agent, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:31:11 localhost podman[82942]: 2025-11-28 08:31:11.032021919 +0000 UTC m=+0.140328210 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:31:11 localhost podman[82942]: 2025-11-28 08:31:11.080555163 +0000 UTC m=+0.188861424 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, container_name=ovn_controller, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:31:11 localhost podman[82943]: 2025-11-28 08:31:11.095248232 +0000 UTC m=+0.200749487 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container) Nov 28 03:31:11 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:31:11 localhost podman[82944]: 2025-11-28 08:31:11.108790346 +0000 UTC m=+0.213275269 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4) Nov 28 03:31:11 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:31:11 localhost podman[82943]: 2025-11-28 08:31:11.151323215 +0000 UTC m=+0.256824530 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=) Nov 28 03:31:11 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:31:11 localhost systemd[1]: tmp-crun.TgrWGw.mount: Deactivated successfully. Nov 28 03:31:29 localhost systemd[1]: tmp-crun.eTtv4t.mount: Deactivated successfully. 
Nov 28 03:31:29 localhost podman[83117]: 2025-11-28 08:31:29.906832197 +0000 UTC m=+0.101994258 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public) Nov 28 03:31:30 localhost podman[83117]: 2025-11-28 08:31:30.009536506 +0000 UTC m=+0.204698557 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, version=7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
release=553, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph) Nov 28 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:31:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:31:34 localhost podman[83261]: 2025-11-28 08:31:34.047057908 +0000 UTC m=+0.155248776 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_id=tripleo_step1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 28 03:31:34 localhost podman[83262]: 2025-11-28 08:31:34.112032574 +0000 UTC m=+0.220325415 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, 
summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:31:34 localhost podman[83262]: 2025-11-28 08:31:34.122606947 +0000 UTC m=+0.230899778 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:31:34 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:31:34 localhost podman[83261]: 2025-11-28 08:31:34.269466515 +0000 UTC m=+0.377657343 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1) Nov 28 03:31:34 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:31:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:31:37 localhost systemd[1]: tmp-crun.AFuGJl.mount: Deactivated successfully. 
Nov 28 03:31:38 localhost podman[83311]: 2025-11-28 08:31:38.041870667 +0000 UTC m=+0.140827084 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:31:38 localhost podman[83309]: 2025-11-28 08:31:38.086219183 +0000 UTC m=+0.191263267 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Nov 28 03:31:38 localhost podman[83310]: 2025-11-28 08:31:38.140039528 +0000 UTC m=+0.242315277 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true) Nov 28 03:31:38 localhost 
podman[83310]: 2025-11-28 08:31:38.153545411 +0000 UTC m=+0.255821070 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, release=1761123044, tcib_managed=true, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1) Nov 28 03:31:38 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:31:38 localhost podman[83311]: 2025-11-28 08:31:38.165790284 +0000 UTC m=+0.264746641 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z) Nov 28 03:31:38 localhost podman[83309]: 2025-11-28 08:31:38.166056132 +0000 UTC m=+0.271100206 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:31:38 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:31:38 localhost podman[83308]: 2025-11-28 08:31:38.006782685 +0000 UTC m=+0.114311584 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, tcib_managed=true, container_name=logrotate_crond, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:31:38 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:31:38 localhost podman[83308]: 2025-11-28 08:31:38.237553468 +0000 UTC m=+0.345082307 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public) Nov 28 03:31:38 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:31:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:31:38 localhost podman[83397]: 2025-11-28 08:31:38.967762184 +0000 UTC m=+0.078654974 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 
nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 28 03:31:39 localhost podman[83397]: 2025-11-28 08:31:39.368666987 +0000 UTC m=+0.479559757 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:31:39 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:31:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:31:41 localhost systemd[1]: tmp-crun.bBODCS.mount: Deactivated successfully. 
Nov 28 03:31:41 localhost podman[83424]: 2025-11-28 08:31:41.9821819 +0000 UTC m=+0.084713479 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:31:42 localhost podman[83423]: 2025-11-28 08:31:41.957011611 +0000 UTC m=+0.065112470 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, config_id=tripleo_step5, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:31:42 localhost podman[83422]: 2025-11-28 08:31:42.030206299 +0000 UTC m=+0.134676267 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Nov 28 03:31:42 localhost podman[83423]: 2025-11-28 08:31:42.046969831 +0000 UTC m=+0.155070740 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_compute, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Nov 28 03:31:42 localhost podman[83422]: 2025-11-28 08:31:42.053918753 +0000 UTC m=+0.158388711 container exec_died 
9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:31:42 localhost podman[83424]: 2025-11-28 08:31:42.054315715 +0000 UTC m=+0.156847294 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z) Nov 28 03:31:42 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:31:42 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:31:42 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:31:53 localhost systemd[1]: session-34.scope: Deactivated successfully. Nov 28 03:31:53 localhost systemd[1]: session-34.scope: Consumed 18.897s CPU time. Nov 28 03:31:53 localhost systemd-logind[763]: Session 34 logged out. Waiting for processes to exit. Nov 28 03:31:53 localhost systemd-logind[763]: Removed session 34. Nov 28 03:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:32:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:32:04 localhost systemd[1]: tmp-crun.mJOFaM.mount: Deactivated successfully. Nov 28 03:32:04 localhost podman[83517]: 2025-11-28 08:32:04.987792314 +0000 UTC m=+0.091984672 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.description=Red Hat 
OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:32:05 localhost podman[83518]: 2025-11-28 08:32:05.107324526 +0000 UTC m=+0.207786551 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-type=git, distribution-scope=public) Nov 28 03:32:05 localhost podman[83518]: 2025-11-28 08:32:05.119445868 +0000 UTC m=+0.219907943 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, 
build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, version=17.1.12, vcs-type=git, name=rhosp17/openstack-collectd, config_id=tripleo_step3) Nov 28 03:32:05 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:32:05 localhost podman[83517]: 2025-11-28 08:32:05.165318799 +0000 UTC m=+0.269511167 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z) Nov 28 03:32:05 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:32:05 localhost systemd[1]: tmp-crun.IAAnNu.mount: Deactivated successfully. Nov 28 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:32:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:32:08 localhost podman[83591]: 2025-11-28 08:32:08.993933358 +0000 UTC m=+0.102040160 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:32:09 localhost podman[83591]: 2025-11-28 08:32:09.032985392 +0000 UTC m=+0.141092224 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:32:09 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:32:09 localhost podman[83592]: 2025-11-28 08:32:09.089408996 +0000 UTC m=+0.195302610 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z) Nov 28 03:32:09 localhost podman[83594]: 2025-11-28 08:32:09.039663275 +0000 UTC m=+0.143684162 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64) Nov 28 03:32:09 localhost podman[83592]: 2025-11-28 08:32:09.146253113 +0000 UTC m=+0.252146687 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Nov 28 03:32:09 localhost podman[83593]: 
2025-11-28 08:32:09.153160735 +0000 UTC m=+0.256321666 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 28 03:32:09 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:32:09 localhost podman[83594]: 2025-11-28 08:32:09.17234098 +0000 UTC m=+0.276361857 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, 
url=https://www.redhat.com, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:32:09 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:32:09 localhost podman[83593]: 2025-11-28 08:32:09.189572117 +0000 UTC m=+0.292733058 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true) Nov 28 03:32:09 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:32:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:32:09 localhost podman[83682]: 2025-11-28 08:32:09.971753592 +0000 UTC m=+0.079151301 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:32:10 localhost podman[83682]: 2025-11-28 08:32:10.324378999 +0000 UTC m=+0.431776628 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:32:10 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:32:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:32:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:32:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:32:12 localhost systemd[1]: tmp-crun.tBlu43.mount: Deactivated successfully. Nov 28 03:32:12 localhost podman[83705]: 2025-11-28 08:32:12.992662896 +0000 UTC m=+0.103666459 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:32:13 localhost podman[83705]: 2025-11-28 08:32:13.017016551 +0000 UTC m=+0.128020114 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, container_name=ovn_controller, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:32:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:32:13 localhost podman[83707]: 2025-11-28 08:32:13.028933115 +0000 UTC m=+0.130755348 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, tcib_managed=true, container_name=ovn_metadata_agent, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=) Nov 28 03:32:13 localhost podman[83707]: 2025-11-28 08:32:13.097544932 +0000 UTC m=+0.199367095 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:32:13 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:32:13 localhost podman[83706]: 2025-11-28 08:32:13.105146504 +0000 UTC m=+0.209153243 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, version=17.1.12) Nov 28 03:32:13 localhost podman[83706]: 2025-11-28 08:32:13.188337486 +0000 UTC m=+0.292344235 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step5, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Nov 28 03:32:13 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:32:31 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:32:31 localhost recover_tripleo_nova_virtqemud[83794]: 62642 Nov 28 03:32:31 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:32:31 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:32:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:32:35 localhost podman[83857]: 2025-11-28 08:32:35.997286773 +0000 UTC m=+0.096928183 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:32:36 localhost systemd[1]: tmp-crun.s4PqyK.mount: Deactivated successfully. Nov 28 03:32:36 localhost podman[83858]: 2025-11-28 08:32:36.06316075 +0000 UTC m=+0.162060327 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 03:32:36 localhost podman[83858]: 2025-11-28 08:32:36.110462245 +0000 UTC m=+0.209361752 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, version=17.1.12) Nov 28 03:32:36 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:32:36 localhost podman[83857]: 2025-11-28 08:32:36.226521487 +0000 UTC m=+0.326162907 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:32:36 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:32:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:32:39 localhost systemd[1]: tmp-crun.5kRtrp.mount: Deactivated successfully. 
Nov 28 03:32:40 localhost podman[83913]: 2025-11-28 08:32:40.004896149 +0000 UTC m=+0.105858168 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com) Nov 28 03:32:40 localhost podman[83905]: 2025-11-28 08:32:39.965782825 +0000 UTC m=+0.079541208 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc.) 
Nov 28 03:32:40 localhost podman[83906]: 2025-11-28 08:32:40.028864636 +0000 UTC m=+0.136080748 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:32:40 localhost podman[83905]: 2025-11-28 08:32:40.050172972 +0000 UTC m=+0.163931375 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:32:40 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:32:40 localhost podman[83913]: 2025-11-28 08:32:40.101390627 +0000 UTC m=+0.202352676 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4) Nov 28 03:32:40 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:32:40 localhost podman[83906]: 2025-11-28 08:32:40.15510879 +0000 UTC m=+0.262324952 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com) Nov 28 03:32:40 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:32:40 localhost podman[83910]: 2025-11-28 08:32:40.240987473 +0000 UTC m=+0.344287624 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1761123044, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 03:32:40 localhost podman[83910]: 2025-11-28 08:32:40.274970268 +0000 UTC m=+0.378270419 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:32:40 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:32:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:32:40 localhost systemd[1]: tmp-crun.2EAtwl.mount: Deactivated successfully. 
Nov 28 03:32:40 localhost podman[83995]: 2025-11-28 08:32:40.960335616 +0000 UTC m=+0.073025089 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, version=17.1.12, config_id=tripleo_step4, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:32:41 localhost podman[83995]: 2025-11-28 08:32:41.330150734 +0000 UTC m=+0.442840187 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Nov 28 03:32:41 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:32:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:32:43 localhost systemd[1]: tmp-crun.VArWLH.mount: Deactivated successfully. 
Nov 28 03:32:43 localhost podman[84020]: 2025-11-28 08:32:43.996047537 +0000 UTC m=+0.095801598 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:32:44 localhost podman[84018]: 2025-11-28 08:32:44.044287511 +0000 UTC m=+0.149044956 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, release=1761123044) Nov 28 03:32:44 localhost podman[84020]: 2025-11-28 08:32:44.073539851 +0000 UTC m=+0.173293912 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, architecture=x86_64, vcs-type=git, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 28 03:32:44 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:32:44 localhost podman[84019]: 2025-11-28 08:32:44.091353199 +0000 UTC m=+0.193527645 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Nov 28 03:32:44 localhost podman[84018]: 2025-11-28 08:32:44.098747977 +0000 UTC m=+0.203505402 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, 
name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_controller, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-ovn-controller, distribution-scope=public, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 28 03:32:44 localhost systemd[1]: 
9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:32:44 localhost podman[84019]: 2025-11-28 08:32:44.145506506 +0000 UTC m=+0.247680942 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1) Nov 28 03:32:44 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:33:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:33:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:33:06 localhost systemd[1]: tmp-crun.EzsEqG.mount: Deactivated successfully. 
Nov 28 03:33:07 localhost podman[84113]: 2025-11-28 08:33:06.999257418 +0000 UTC m=+0.084912295 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd) Nov 28 03:33:07 localhost podman[84112]: 2025-11-28 08:33:07.019399367 +0000 UTC m=+0.122111158 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack 
TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git) Nov 28 03:33:07 localhost podman[84113]: 2025-11-28 08:33:07.08256807 +0000 UTC m=+0.168222907 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red 
Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, managed_by=tripleo_ansible) Nov 28 03:33:07 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:33:07 localhost podman[84112]: 2025-11-28 08:33:07.203524902 +0000 UTC m=+0.306236743 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true) Nov 28 03:33:07 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:33:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:33:10 localhost podman[84188]: 2025-11-28 08:33:10.973253108 +0000 UTC m=+0.079458085 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:33:11 localhost podman[84188]: 2025-11-28 08:33:11.012489395 +0000 UTC m=+0.118694412 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com) Nov 28 03:33:11 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:33:11 localhost podman[84189]: 2025-11-28 08:33:11.028323653 +0000 UTC m=+0.132523159 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:33:11 localhost podman[84187]: 2025-11-28 08:33:11.078436755 +0000 UTC m=+0.185604782 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public) Nov 28 03:33:11 localhost podman[84189]: 2025-11-28 08:33:11.0834871 +0000 UTC m=+0.187686656 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:33:11 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:33:11 localhost podman[84186]: 2025-11-28 08:33:11.129659901 +0000 UTC m=+0.237632233 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 
'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 28 03:33:11 localhost podman[84186]: 2025-11-28 08:33:11.135562482 +0000 UTC m=+0.243534844 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc.) Nov 28 03:33:11 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:33:11 localhost podman[84187]: 2025-11-28 08:33:11.191174703 +0000 UTC m=+0.298342730 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, release=1761123044) Nov 28 03:33:11 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:33:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:33:11 localhost podman[84276]: 2025-11-28 08:33:11.973444062 +0000 UTC m=+0.080326993 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:33:12 localhost podman[84276]: 2025-11-28 08:33:12.357262741 +0000 UTC m=+0.464145662 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, 
maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 
nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:33:12 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:33:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:33:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:33:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:33:14 localhost podman[84301]: 2025-11-28 08:33:14.975216111 +0000 UTC m=+0.079807077 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-compute-container) 
Nov 28 03:33:15 localhost systemd[1]: tmp-crun.EI6kHC.mount: Deactivated successfully. Nov 28 03:33:15 localhost podman[84300]: 2025-11-28 08:33:15.040828009 +0000 UTC m=+0.147361414 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, container_name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4) Nov 28 03:33:15 localhost podman[84301]: 2025-11-28 08:33:15.055225282 +0000 UTC m=+0.159816288 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Nov 28 03:33:15 localhost podman[84302]: 2025-11-28 08:33:15.086807544 +0000 UTC m=+0.185213850 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public) Nov 28 03:33:15 localhost podman[84300]: 2025-11-28 08:33:15.091485668 +0000 UTC m=+0.198019033 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1) Nov 28 03:33:15 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:33:15 localhost podman[84302]: 2025-11-28 08:33:15.126398682 +0000 UTC m=+0.224805018 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:33:15 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:33:15 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:33:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:33:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:33:37 localhost systemd[1]: tmp-crun.sZ2fdM.mount: Deactivated successfully. Nov 28 03:33:37 localhost podman[84448]: 2025-11-28 08:33:37.9878454 +0000 UTC m=+0.092350612 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com) Nov 28 03:33:38 localhost podman[84449]: 2025-11-28 08:33:38.035162236 +0000 UTC m=+0.138226723 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red 
Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Nov 28 03:33:38 localhost podman[84449]: 2025-11-28 08:33:38.049631411 +0000 UTC m=+0.152695828 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, container_name=collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd) Nov 28 03:33:38 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:33:38 localhost podman[84448]: 2025-11-28 08:33:38.212928086 +0000 UTC m=+0.317433258 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:33:38 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:33:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:33:41 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:33:41 localhost recover_tripleo_nova_virtqemud[84523]: 62642 Nov 28 03:33:41 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:33:41 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:33:42 localhost podman[84497]: 2025-11-28 08:33:42.013243633 +0000 UTC m=+0.118265999 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com) Nov 28 03:33:42 localhost podman[84497]: 2025-11-28 08:33:42.023700395 +0000 UTC m=+0.128722741 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc.) 
Nov 28 03:33:42 localhost podman[84498]: 2025-11-28 08:33:42.027807042 +0000 UTC m=+0.132335993 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:33:42 localhost podman[84498]: 2025-11-28 08:33:42.062492219 +0000 UTC m=+0.167021220 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Nov 28 03:33:42 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:33:42 localhost podman[84499]: 2025-11-28 08:33:42.081398181 +0000 UTC m=+0.179739962 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid) Nov 28 03:33:42 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:33:42 localhost podman[84499]: 2025-11-28 08:33:42.095428662 +0000 UTC m=+0.193770473 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, release=1761123044, container_name=iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public) Nov 28 03:33:42 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:33:42 localhost podman[84500]: 2025-11-28 08:33:42.191699324 +0000 UTC m=+0.287158376 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, container_name=ceilometer_agent_compute) Nov 28 03:33:42 localhost podman[84500]: 2025-11-28 08:33:42.245503739 +0000 UTC m=+0.340962821 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4) Nov 28 03:33:42 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:33:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:33:42 localhost podman[84591]: 2025-11-28 08:33:42.96378889 +0000 UTC m=+0.075064121 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:33:43 localhost podman[84591]: 2025-11-28 08:33:43.322527668 +0000 UTC m=+0.433802899 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=nova_migration_target) Nov 28 03:33:43 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:33:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:33:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:33:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:33:45 localhost podman[84615]: 2025-11-28 08:33:45.979897527 +0000 UTC m=+0.086402029 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4) Nov 28 03:33:46 localhost podman[84616]: 2025-11-28 08:33:46.035035384 +0000 UTC m=+0.139258486 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Nov 28 03:33:46 localhost podman[84615]: 2025-11-28 08:33:46.088553641 +0000 UTC m=+0.195058173 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=ovn_controller, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 28 03:33:46 localhost podman[84616]: 2025-11-28 08:33:46.091411269 +0000 UTC m=+0.195634341 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, version=17.1.12, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_compute) Nov 28 03:33:46 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:33:46 localhost podman[84617]: 2025-11-28 08:33:46.090686927 +0000 UTC m=+0.191249375 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:33:46 localhost podman[84617]: 2025-11-28 08:33:46.172570076 +0000 UTC m=+0.273132544 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:33:46 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:33:46 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:33:46 localhost systemd[1]: tmp-crun.SBZ1l2.mount: Deactivated successfully. Nov 28 03:34:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:34:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:34:08 localhost systemd[1]: tmp-crun.kNSsOF.mount: Deactivated successfully. Nov 28 03:34:08 localhost podman[84732]: 2025-11-28 08:34:08.981414496 +0000 UTC m=+0.092057263 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, release=1761123044, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Nov 28 03:34:09 localhost podman[84733]: 2025-11-28 08:34:09.026831693 +0000 UTC m=+0.133252660 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true) Nov 28 03:34:09 localhost podman[84733]: 2025-11-28 08:34:09.036209033 +0000 UTC m=+0.142629970 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Nov 28 03:34:09 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:34:09 localhost podman[84732]: 2025-11-28 08:34:09.16746563 +0000 UTC m=+0.278108357 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:34:09 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:34:11 localhost sshd[84782]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:34:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:34:13 localhost systemd[1]: tmp-crun.cVp8kB.mount: Deactivated successfully. 
Nov 28 03:34:13 localhost podman[84785]: 2025-11-28 08:34:13.035269835 +0000 UTC m=+0.133313732 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi) Nov 28 03:34:13 localhost podman[84786]: 2025-11-28 08:34:13.098942934 +0000 UTC m=+0.190639826 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:34:13 localhost podman[84786]: 2025-11-28 08:34:13.112404648 +0000 UTC m=+0.204101590 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, tcib_managed=true, name=rhosp17/openstack-iscsid, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Nov 28 03:34:13 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:34:13 localhost podman[84790]: 2025-11-28 08:34:13.15601465 +0000 UTC m=+0.243827013 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO 
Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, io.openshift.expose-services=) Nov 28 03:34:13 localhost podman[84784]: 2025-11-28 08:34:13.06726206 +0000 UTC m=+0.168447015 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, container_name=logrotate_crond, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Nov 28 03:34:13 localhost podman[84785]: 2025-11-28 08:34:13.174734456 +0000 UTC m=+0.272778363 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:34:13 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:34:13 localhost podman[84784]: 2025-11-28 08:34:13.200458137 +0000 UTC m=+0.301643042 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond) Nov 28 03:34:13 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:34:13 localhost podman[84790]: 2025-11-28 08:34:13.215127168 +0000 UTC m=+0.302939571 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:34:13 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:34:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:34:13 localhost podman[84877]: 2025-11-28 08:34:13.978605859 +0000 UTC m=+0.088161254 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, version=17.1.12, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:34:14 localhost systemd[1]: tmp-crun.jzmiQ9.mount: Deactivated successfully. Nov 28 03:34:14 localhost podman[84877]: 2025-11-28 08:34:14.36053848 +0000 UTC m=+0.470093865 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:34:14 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:34:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:34:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:34:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:34:16 localhost systemd[1]: tmp-crun.QqCI79.mount: Deactivated successfully. Nov 28 03:34:16 localhost podman[84901]: 2025-11-28 08:34:16.992970004 +0000 UTC m=+0.092783925 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 28 03:34:17 localhost systemd[1]: tmp-crun.raA8dO.mount: Deactivated successfully. 
Nov 28 03:34:17 localhost podman[84900]: 2025-11-28 08:34:17.04647642 +0000 UTC m=+0.146785456 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team) Nov 28 03:34:17 localhost podman[84902]: 2025-11-28 08:34:17.093486597 +0000 UTC m=+0.186767588 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:34:17 localhost podman[84901]: 2025-11-28 08:34:17.101301098 +0000 UTC m=+0.201114979 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team) Nov 28 03:34:17 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:34:17 localhost podman[84900]: 2025-11-28 08:34:17.116617438 +0000 UTC m=+0.216926464 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:34:17 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:34:17 localhost podman[84902]: 2025-11-28 08:34:17.159695724 +0000 UTC m=+0.252976755 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent) Nov 28 
03:34:17 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:34:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:34:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:34:40 localhost podman[85049]: 2025-11-28 08:34:40.016123307 +0000 UTC m=+0.123491999 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, version=17.1.12, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:34:40 localhost podman[85048]: 2025-11-28 08:34:40.035230075 +0000 UTC m=+0.144867137 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, release=1761123044, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64) Nov 28 03:34:40 localhost podman[85049]: 2025-11-28 08:34:40.036874856 +0000 UTC 
m=+0.144243548 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, tcib_managed=true, container_name=collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd) Nov 28 03:34:40 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:34:40 localhost podman[85048]: 2025-11-28 08:34:40.229937406 +0000 UTC m=+0.339574448 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git) Nov 28 03:34:40 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:34:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:34:43 localhost podman[85098]: 2025-11-28 08:34:43.97812827 +0000 UTC m=+0.085802551 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:34:43 localhost podman[85098]: 2025-11-28 08:34:43.9917807 +0000 UTC m=+0.099455041 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, url=https://www.redhat.com, container_name=logrotate_crond, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64) Nov 28 03:34:44 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:34:44 localhost systemd[1]: tmp-crun.Nlrf8p.mount: Deactivated successfully. 
Nov 28 03:34:44 localhost podman[85101]: 2025-11-28 08:34:44.054660285 +0000 UTC m=+0.153822854 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc.) Nov 28 03:34:44 localhost podman[85101]: 2025-11-28 08:34:44.082668697 +0000 UTC m=+0.181831266 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Nov 28 03:34:44 localhost podman[85099]: 2025-11-28 08:34:44.098259026 +0000 UTC m=+0.202532413 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public) Nov 28 03:34:44 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:34:44 localhost podman[85099]: 2025-11-28 08:34:44.131467358 +0000 UTC m=+0.235740735 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044) Nov 28 03:34:44 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:34:44 localhost podman[85100]: 2025-11-28 08:34:44.148886594 +0000 UTC m=+0.249538449 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:34:44 localhost podman[85100]: 2025-11-28 08:34:44.188427841 +0000 UTC m=+0.289079656 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, container_name=iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:34:44 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:34:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:34:44 localhost podman[85189]: 2025-11-28 08:34:44.963448406 +0000 UTC m=+0.075544926 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:34:45 localhost podman[85189]: 2025-11-28 08:34:45.323086082 +0000 UTC m=+0.435182612 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:34:45 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:34:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:34:47 localhost podman[85212]: 2025-11-28 08:34:47.962182781 +0000 UTC m=+0.073040278 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:34:48 localhost podman[85214]: 2025-11-28 08:34:48.007466844 +0000 UTC m=+0.111571194 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:34:48 localhost podman[85212]: 2025-11-28 08:34:48.018483003 +0000 UTC m=+0.129340490 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4) Nov 28 03:34:48 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:34:48 localhost podman[85214]: 2025-11-28 08:34:48.072782204 +0000 UTC m=+0.176886614 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, 
build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.buildah.version=1.41.4, container_name=ovn_metadata_agent) Nov 28 03:34:48 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:34:48 localhost podman[85213]: 2025-11-28 08:34:48.077722446 +0000 UTC m=+0.183884859 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Nov 28 03:34:48 localhost podman[85213]: 2025-11-28 08:34:48.161648568 +0000 UTC m=+0.267810991 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, release=1761123044, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team) Nov 28 03:34:48 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:35:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:35:03 localhost recover_tripleo_nova_virtqemud[85285]: 62642 Nov 28 03:35:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:35:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:35:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:35:10 localhost podman[85331]: 2025-11-28 08:35:10.992393021 +0000 UTC m=+0.092917149 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1) Nov 28 03:35:11 localhost systemd[1]: tmp-crun.kRvbaH.mount: Deactivated successfully. Nov 28 03:35:11 localhost podman[85332]: 2025-11-28 08:35:11.048132596 +0000 UTC m=+0.147365355 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:35:11 localhost podman[85332]: 2025-11-28 08:35:11.060418524 +0000 UTC m=+0.159651263 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, container_name=collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3) Nov 28 03:35:11 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:35:11 localhost podman[85331]: 2025-11-28 08:35:11.224135731 +0000 UTC m=+0.324659929 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:35:11 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:35:14 localhost systemd[1]: tmp-crun.fYgGdF.mount: Deactivated successfully. Nov 28 03:35:14 localhost podman[85383]: 2025-11-28 08:35:14.980189816 +0000 UTC m=+0.084999816 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, distribution-scope=public, 
batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team) Nov 28 03:35:15 localhost podman[85383]: 2025-11-28 08:35:15.016533874 +0000 UTC m=+0.121343914 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, version=17.1.12) Nov 28 03:35:15 localhost systemd[1]: tmp-crun.L7EoRN.mount: Deactivated successfully. Nov 28 03:35:15 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:35:15 localhost podman[85380]: 2025-11-28 08:35:15.040379998 +0000 UTC m=+0.152448422 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, description=Red 
Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:35:15 localhost podman[85380]: 2025-11-28 08:35:15.075826908 +0000 UTC m=+0.187895312 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:35:15 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:35:15 localhost podman[85382]: 2025-11-28 08:35:15.088480638 +0000 UTC m=+0.194138714 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, 
url=https://www.redhat.com, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, release=1761123044, container_name=iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:35:15 localhost podman[85382]: 2025-11-28 08:35:15.105376088 +0000 UTC m=+0.211034194 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1) Nov 28 03:35:15 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:35:15 localhost podman[85381]: 2025-11-28 08:35:15.125683632 +0000 UTC m=+0.232992209 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 28 03:35:15 localhost podman[85381]: 2025-11-28 08:35:15.183546153 +0000 UTC m=+0.290854760 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, tcib_managed=true) Nov 28 03:35:15 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:35:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:35:15 localhost podman[85472]: 2025-11-28 08:35:15.96935648 +0000 UTC m=+0.078522486 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:35:16 localhost podman[85472]: 2025-11-28 08:35:16.339882391 +0000 UTC m=+0.449048377 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64) Nov 28 03:35:16 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:35:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:35:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:35:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:35:18 localhost systemd[1]: tmp-crun.g4Sce6.mount: Deactivated successfully. 
Nov 28 03:35:18 localhost podman[85495]: 2025-11-28 08:35:18.988154142 +0000 UTC m=+0.094461347 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, architecture=x86_64, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 28 03:35:19 localhost podman[85496]: 2025-11-28 08:35:19.037212191 +0000 UTC m=+0.140921176 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, tcib_managed=true, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:35:19 localhost podman[85497]: 2025-11-28 08:35:19.088893582 +0000 UTC m=+0.191544984 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, 
maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:35:19 localhost podman[85496]: 2025-11-28 08:35:19.116289335 +0000 UTC m=+0.219998390 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:35:19 localhost podman[85495]: 2025-11-28 08:35:19.11743932 +0000 UTC m=+0.223746545 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 
(image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) 
Nov 28 03:35:19 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:35:19 localhost podman[85497]: 2025-11-28 08:35:19.160843496 +0000 UTC m=+0.263494818 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team) Nov 28 03:35:19 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:35:19 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:35:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:35:41 localhost podman[85649]: 2025-11-28 08:35:41.99132657 +0000 UTC m=+0.096181170 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044) Nov 28 03:35:42 localhost systemd[1]: tmp-crun.cZRn8M.mount: Deactivated successfully. Nov 28 03:35:42 localhost podman[85650]: 2025-11-28 08:35:42.055223896 +0000 UTC m=+0.155343760 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd) Nov 28 03:35:42 localhost podman[85650]: 2025-11-28 08:35:42.065676098 +0000 UTC m=+0.165795912 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Nov 28 03:35:42 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:35:42 localhost podman[85649]: 2025-11-28 08:35:42.20777048 +0000 UTC m=+0.312625100 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc.) Nov 28 03:35:42 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:35:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:35:45 localhost podman[85697]: 2025-11-28 08:35:45.977508797 +0000 UTC m=+0.080193138 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, release=1761123044, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1) Nov 28 03:35:46 localhost podman[85697]: 2025-11-28 08:35:46.01042757 +0000 UTC m=+0.113111871 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:35:46 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:35:46 localhost systemd[1]: tmp-crun.QYGgRW.mount: Deactivated successfully. 
Nov 28 03:35:46 localhost podman[85698]: 2025-11-28 08:35:46.088164252 +0000 UTC m=+0.187022155 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:35:46 localhost podman[85698]: 2025-11-28 08:35:46.118380442 +0000 UTC m=+0.217238365 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi) Nov 28 03:35:46 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:35:46 localhost systemd[1]: tmp-crun.DNY3wl.mount: Deactivated successfully. 
Nov 28 03:35:46 localhost podman[85699]: 2025-11-28 08:35:46.196648429 +0000 UTC m=+0.294517062 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Nov 28 03:35:46 localhost podman[85700]: 2025-11-28 08:35:46.202352165 +0000 UTC m=+0.295798112 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4) Nov 28 03:35:46 localhost podman[85699]: 2025-11-28 08:35:46.209376491 +0000 UTC m=+0.307245184 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.buildah.version=1.41.4, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible) Nov 28 03:35:46 localhost systemd[1]: 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:35:46 localhost podman[85700]: 2025-11-28 08:35:46.24052818 +0000 UTC m=+0.333974127 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4) Nov 28 03:35:46 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:35:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:35:46 localhost podman[85789]: 2025-11-28 08:35:46.987218344 +0000 UTC m=+0.093307372 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z) Nov 28 03:35:47 localhost podman[85789]: 2025-11-28 08:35:47.353832704 +0000 UTC m=+0.459921682 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z) Nov 28 03:35:47 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:35:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:35:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:35:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:35:49 localhost systemd[1]: tmp-crun.cFLmsq.mount: Deactivated successfully. 
Nov 28 03:35:49 localhost podman[85814]: 2025-11-28 08:35:49.988288521 +0000 UTC m=+0.093098534 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, url=https://www.redhat.com, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:35:50 localhost podman[85816]: 2025-11-28 08:35:50.038924559 +0000 UTC m=+0.139869353 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:35:50 localhost podman[85815]: 2025-11-28 08:35:50.096235391 +0000 UTC m=+0.198028291 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 28 03:35:50 localhost podman[85814]: 2025-11-28 08:35:50.115051851 +0000 UTC m=+0.219861834 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': 
'/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller) Nov 28 03:35:50 localhost podman[85815]: 2025-11-28 08:35:50.12641395 +0000 UTC m=+0.228206850 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, 
batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=) Nov 28 03:35:50 
localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:35:50 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:35:50 localhost podman[85816]: 2025-11-28 08:35:50.15011444 +0000 UTC m=+0.251059254 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git) Nov 28 03:35:50 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:36:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:36:13 localhost podman[85935]: 2025-11-28 08:36:13.016754628 +0000 UTC m=+0.118046153 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 
17.1_20251118.1, version=17.1.12, container_name=collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 28 03:36:13 localhost podman[85935]: 2025-11-28 08:36:13.030495131 +0000 UTC m=+0.131786646 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 28 03:36:13 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:36:13 localhost podman[85934]: 2025-11-28 08:36:13.122502032 +0000 UTC m=+0.228366497 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:36:13 localhost podman[85934]: 2025-11-28 08:36:13.315058597 +0000 UTC m=+0.420923062 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4) Nov 28 03:36:13 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:36:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:36:16 localhost podman[85985]: 2025-11-28 08:36:16.971462697 +0000 UTC m=+0.080448827 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=) Nov 28 03:36:16 localhost podman[85985]: 2025-11-28 08:36:16.982645981 +0000 UTC m=+0.091632111 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:36:17 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:36:17 localhost podman[85987]: 2025-11-28 08:36:17.033251278 +0000 UTC m=+0.135933443 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.expose-services=) Nov 28 03:36:17 localhost podman[85987]: 2025-11-28 08:36:17.071502005 +0000 UTC m=+0.174184160 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, container_name=iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:36:17 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:36:17 localhost podman[85986]: 2025-11-28 08:36:17.078617064 +0000 UTC m=+0.182634770 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:36:17 localhost podman[85988]: 2025-11-28 08:36:17.140838578 +0000 UTC m=+0.243293666 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute) Nov 28 03:36:17 localhost podman[85986]: 2025-11-28 08:36:17.164463125 +0000 UTC m=+0.268480831 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:36:17 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:36:17 localhost podman[85988]: 2025-11-28 08:36:17.176477275 +0000 UTC m=+0.278932413 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:36:17 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:36:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:36:18 localhost systemd[1]: tmp-crun.rUwpAh.mount: Deactivated successfully. 
Nov 28 03:36:18 localhost podman[86077]: 2025-11-28 08:36:18.027335004 +0000 UTC m=+0.133303793 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team) Nov 28 03:36:18 localhost podman[86077]: 2025-11-28 08:36:18.396807992 +0000 UTC m=+0.502776821 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4) Nov 28 03:36:18 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:36:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:36:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:36:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:36:20 localhost podman[86099]: 2025-11-28 08:36:20.987009697 +0000 UTC m=+0.092984572 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller) Nov 28 03:36:21 localhost podman[86099]: 2025-11-28 08:36:21.03488275 +0000 UTC m=+0.140857655 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, container_name=ovn_controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:36:21 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:36:21 localhost podman[86101]: 2025-11-28 08:36:21.045311621 +0000 UTC m=+0.144785105 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:36:21 localhost podman[86100]: 2025-11-28 08:36:21.099994704 +0000 UTC m=+0.202122250 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, 
tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1) Nov 28 03:36:21 localhost podman[86101]: 2025-11-28 08:36:21.125827869 +0000 UTC m=+0.225301403 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, distribution-scope=public) Nov 28 03:36:21 localhost systemd[1]: 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:36:21 localhost podman[86100]: 2025-11-28 08:36:21.179368306 +0000 UTC m=+0.281495842 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:36:21 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:36:43 localhost systemd[1]: tmp-crun.H5meR1.mount: Deactivated successfully. 
Nov 28 03:36:43 localhost podman[86249]: 2025-11-28 08:36:43.988943486 +0000 UTC m=+0.091037081 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:36:44 localhost podman[86249]: 2025-11-28 08:36:44.028545154 +0000 UTC m=+0.130638739 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_id=tripleo_step3, 
name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:36:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:36:44 localhost podman[86248]: 2025-11-28 08:36:44.032206358 +0000 UTC m=+0.134719526 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:36:44 localhost podman[86248]: 2025-11-28 08:36:44.224471123 +0000 UTC m=+0.326984301 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Nov 28 03:36:44 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:36:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:36:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:36:47 localhost recover_tripleo_nova_virtqemud[86322]: 62642 Nov 28 03:36:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:36:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:36:47 localhost podman[86300]: 2025-11-28 08:36:47.986270066 +0000 UTC m=+0.090430053 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:36:48 localhost podman[86299]: 2025-11-28 08:36:48.035352266 +0000 UTC m=+0.141309609 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, 
container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, tcib_managed=true) Nov 28 03:36:48 localhost podman[86299]: 2025-11-28 08:36:48.046331764 +0000 UTC m=+0.152289167 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, 
release=1761123044, version=17.1.12, managed_by=tripleo_ansible) Nov 28 03:36:48 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:36:48 localhost podman[86301]: 2025-11-28 08:36:48.088722618 +0000 UTC m=+0.189406599 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:36:48 localhost podman[86301]: 2025-11-28 08:36:48.10242304 +0000 UTC m=+0.203107001 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_id=tripleo_step3, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Nov 28 03:36:48 localhost podman[86302]: 2025-11-28 08:36:48.135140526 +0000 UTC m=+0.232288998 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64) Nov 28 03:36:48 localhost podman[86300]: 2025-11-28 08:36:48.163244571 +0000 UTC m=+0.267404528 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, version=17.1.12, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:36:48 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:36:48 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:36:48 localhost podman[86302]: 2025-11-28 08:36:48.192417999 +0000 UTC m=+0.289566521 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Nov 28 03:36:48 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:36:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:36:48 localhost podman[86394]: 2025-11-28 08:36:48.958261752 +0000 UTC m=+0.071155790 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:36:49 localhost podman[86394]: 2025-11-28 08:36:49.350464239 +0000 UTC m=+0.463358217 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute) Nov 28 03:36:49 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:36:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:36:51 localhost podman[86419]: 2025-11-28 08:36:51.978024813 +0000 UTC m=+0.081339683 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:36:52 localhost podman[86419]: 2025-11-28 08:36:52.028036602 +0000 UTC m=+0.131351482 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, vcs-type=git, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:14:25Z) Nov 28 03:36:52 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:36:52 localhost podman[86418]: 2025-11-28 08:36:52.081752225 +0000 UTC m=+0.187491480 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:36:52 localhost podman[86418]: 2025-11-28 08:36:52.109496728 +0000 UTC m=+0.215235953 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:36:52 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:36:52 localhost podman[86417]: 2025-11-28 08:36:52.030268691 +0000 UTC m=+0.138741340 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller) Nov 28 03:36:52 localhost podman[86417]: 2025-11-28 08:36:52.169469503 +0000 UTC m=+0.277942142 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.) Nov 28 03:36:52 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:37:14 localhost systemd[1]: tmp-crun.YB11PN.mount: Deactivated successfully. Nov 28 03:37:14 localhost podman[86538]: 2025-11-28 08:37:14.988765846 +0000 UTC m=+0.095996084 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat 
OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1) Nov 28 03:37:14 localhost podman[86538]: 2025-11-28 08:37:14.998245807 +0000 UTC m=+0.105476045 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, container_name=collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, tcib_managed=true) Nov 28 03:37:15 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:37:15 localhost podman[86537]: 2025-11-28 08:37:14.96289718 +0000 UTC m=+0.075292547 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:37:15 localhost podman[86537]: 2025-11-28 08:37:15.175007097 +0000 UTC m=+0.287402514 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:37:15 localhost 
systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:37:15 localhost systemd[1]: tmp-crun.enHL1F.mount: Deactivated successfully. Nov 28 03:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:37:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:37:18 localhost systemd[1]: tmp-crun.lempb3.mount: Deactivated successfully. Nov 28 03:37:18 localhost podman[86587]: 2025-11-28 08:37:18.995845746 +0000 UTC m=+0.095817880 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git) Nov 28 03:37:19 localhost podman[86589]: 2025-11-28 08:37:19.015626375 +0000 UTC m=+0.108955424 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, 
release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:37:19 localhost podman[86587]: 2025-11-28 08:37:19.046598628 +0000 UTC m=+0.146570722 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:37:19 localhost podman[86589]: 2025-11-28 08:37:19.07009408 +0000 UTC m=+0.163423169 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z) Nov 28 03:37:19 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:37:19 localhost podman[86588]: 2025-11-28 08:37:19.094061098 +0000 UTC m=+0.188400318 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team) Nov 28 03:37:19 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:37:19 localhost podman[86588]: 2025-11-28 08:37:19.131423667 +0000 UTC m=+0.225762897 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, release=1761123044, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO 
Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:37:19 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:37:19 localhost podman[86586]: 2025-11-28 08:37:19.150112243 +0000 UTC m=+0.253052088 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, container_name=logrotate_crond, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4) Nov 28 03:37:19 localhost podman[86586]: 2025-11-28 08:37:19.158260532 +0000 UTC m=+0.261200397 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, tcib_managed=true, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 03:37:19 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:37:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:37:19 localhost podman[86675]: 2025-11-28 08:37:19.979205432 +0000 UTC m=+0.081204229 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team) Nov 28 03:37:20 localhost podman[86675]: 2025-11-28 08:37:20.363428563 +0000 UTC m=+0.465427310 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, release=1761123044, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible) Nov 28 03:37:20 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:37:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:37:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:37:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:37:22 localhost systemd[1]: tmp-crun.3GP8Us.mount: Deactivated successfully. Nov 28 03:37:22 localhost podman[86698]: 2025-11-28 08:37:22.985107157 +0000 UTC m=+0.091946560 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64) Nov 28 03:37:23 localhost podman[86700]: 2025-11-28 08:37:23.032104692 +0000 UTC m=+0.132004562 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 28 03:37:23 localhost podman[86698]: 2025-11-28 08:37:23.036523929 +0000 UTC m=+0.143363332 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, 
distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:37:23 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:37:23 localhost podman[86699]: 2025-11-28 08:37:23.092951245 +0000 UTC m=+0.196482156 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.openshift.expose-services=, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:37:23 localhost podman[86700]: 2025-11-28 08:37:23.107559964 +0000 UTC m=+0.207459844 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 28 03:37:23 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:37:23 localhost podman[86699]: 2025-11-28 08:37:23.124523476 +0000 UTC m=+0.228054427 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:37:23 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:37:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:37:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:37:45 localhost podman[86847]: 2025-11-28 08:37:45.986924425 +0000 UTC m=+0.091369721 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:37:46 localhost podman[86847]: 2025-11-28 08:37:46.001555465 +0000 UTC m=+0.106000791 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:37:46 localhost podman[86846]: 2025-11-28 08:37:46.038720218 +0000 UTC m=+0.143132473 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 
(image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr) Nov 28 03:37:46 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:37:46 localhost podman[86846]: 2025-11-28 08:37:46.257123638 +0000 UTC m=+0.361535923 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vcs-type=git) Nov 28 03:37:46 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:37:49 localhost systemd[1]: tmp-crun.1qtNwE.mount: Deactivated successfully. 
Nov 28 03:37:50 localhost podman[86895]: 2025-11-28 08:37:49.999672699 +0000 UTC m=+0.101495063 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com) Nov 28 03:37:50 localhost podman[86894]: 2025-11-28 08:37:49.974496164 +0000 UTC m=+0.079090953 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron) Nov 28 03:37:50 localhost podman[86895]: 2025-11-28 08:37:50.057448326 +0000 UTC m=+0.159270660 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:37:50 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:37:50 localhost podman[86896]: 2025-11-28 08:37:50.033662964 +0000 UTC m=+0.131754823 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:37:50 localhost podman[86894]: 2025-11-28 08:37:50.111477979 +0000 UTC m=+0.216072788 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, maintainer=OpenStack 
TripleO Team, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1) Nov 28 03:37:50 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:37:50 localhost podman[86896]: 2025-11-28 08:37:50.16256131 +0000 UTC m=+0.260653149 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:37:50 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:37:50 localhost podman[86897]: 2025-11-28 08:37:50.245821902 +0000 UTC m=+0.340133415 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4) Nov 28 03:37:50 localhost podman[86897]: 2025-11-28 08:37:50.273465993 +0000 UTC m=+0.367777486 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64) Nov 28 03:37:50 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:37:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:37:50 localhost podman[86980]: 2025-11-28 08:37:50.987961136 +0000 UTC m=+0.092664913 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 28 03:37:51 localhost podman[86980]: 2025-11-28 08:37:51.367233005 +0000 UTC m=+0.471936752 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:37:51 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. 
Nov 28 03:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:37:53 localhost podman[87006]: 2025-11-28 08:37:53.997854893 +0000 UTC m=+0.095989104 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:37:54 localhost podman[87006]: 2025-11-28 08:37:54.042900989 +0000 UTC m=+0.141035070 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:37:54 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:37:54 localhost podman[87004]: 2025-11-28 08:37:54.044462948 +0000 UTC m=+0.146736786 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public) Nov 28 03:37:54 localhost podman[87005]: 2025-11-28 08:37:54.102271956 +0000 UTC m=+0.200459388 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12) Nov 28 03:37:54 localhost podman[87005]: 2025-11-28 08:37:54.132682042 +0000 UTC m=+0.230869494 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.12) Nov 28 03:37:54 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:37:54 localhost podman[87004]: 2025-11-28 08:37:54.183424963 +0000 UTC m=+0.285698791 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12) Nov 28 03:37:54 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:38:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:38:16 localhost podman[87123]: 2025-11-28 08:38:16.97531582 +0000 UTC m=+0.080007062 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Nov 28 03:38:17 localhost podman[87124]: 2025-11-28 08:38:17.028488876 +0000 UTC m=+0.132640811 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd) Nov 28 03:38:17 localhost podman[87124]: 2025-11-28 08:38:17.036579185 +0000 UTC m=+0.140731090 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, release=1761123044, config_id=tripleo_step3, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:38:17 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:38:17 localhost podman[87123]: 2025-11-28 08:38:17.206099932 +0000 UTC m=+0.310791154 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:38:17 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:38:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:38:20 localhost podman[87170]: 2025-11-28 08:38:20.988651852 +0000 UTC m=+0.092387754 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO 
Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:38:21 localhost podman[87170]: 2025-11-28 08:38:21.000909288 +0000 UTC m=+0.104645150 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:38:21 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:38:21 localhost systemd[1]: tmp-crun.G0NVCq.mount: Deactivated successfully. 
Nov 28 03:38:21 localhost podman[87172]: 2025-11-28 08:38:21.055661244 +0000 UTC m=+0.154138024 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:38:21 localhost podman[87172]: 2025-11-28 08:38:21.068542119 +0000 UTC m=+0.167018949 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64) Nov 28 03:38:21 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:38:21 localhost podman[87173]: 2025-11-28 08:38:21.149374686 +0000 UTC m=+0.242268234 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:38:21 localhost podman[87173]: 2025-11-28 08:38:21.184522618 +0000 UTC m=+0.277416136 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1) Nov 28 03:38:21 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:38:21 localhost podman[87171]: 2025-11-28 08:38:21.198028964 +0000 UTC m=+0.295910516 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12) Nov 28 03:38:21 localhost podman[87171]: 2025-11-28 08:38:21.224393135 +0000 UTC m=+0.322274687 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git) Nov 28 03:38:21 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:38:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:38:21 localhost podman[87261]: 2025-11-28 08:38:21.985039558 +0000 UTC m=+0.090819745 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:38:22 localhost podman[87261]: 2025-11-28 08:38:22.391604218 +0000 UTC m=+0.497384395 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git) Nov 28 03:38:22 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:38:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:38:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:38:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:38:24 localhost podman[87285]: 2025-11-28 08:38:24.980112631 +0000 UTC m=+0.083939344 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:38:25 localhost podman[87284]: 2025-11-28 08:38:25.032061089 +0000 UTC m=+0.139325158 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64) Nov 28 03:38:25 localhost podman[87284]: 2025-11-28 08:38:25.060478073 +0000 UTC m=+0.167742202 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:38:25 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:38:25 localhost podman[87286]: 2025-11-28 08:38:25.078507408 +0000 UTC m=+0.180825615 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, release=1761123044, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:38:25 localhost podman[87285]: 2025-11-28 08:38:25.112125752 +0000 UTC m=+0.215952445 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, 
name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team) Nov 28 03:38:25 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:38:25 localhost podman[87286]: 2025-11-28 08:38:25.141438904 +0000 UTC m=+0.243757071 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, container_name=ovn_metadata_agent, vendor=Red Hat, Inc.) Nov 28 03:38:25 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:38:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:38:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 398 writes, 1523 keys, 398 commit groups, 1.0 writes per commit group, ingest: 1.93 MB, 0.00 MB/s#012Interval WAL: 398 writes, 144 syncs, 2.76 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:38:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:38:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 535 writes, 2212 keys, 535 commit groups, 1.0 writes per commit group, ingest: 2.71 MB, 0.00 MB/s#012Interval WAL: 535 writes, 189 syncs, 2.83 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:38:47 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=33791 DF PROTO=TCP SPT=28136 DPT=19885 SEQ=1257399283 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FA29A000000000103030A) Nov 28 03:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:38:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:38:47 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:38:47 localhost recover_tripleo_nova_virtqemud[87437]: 62642 Nov 28 03:38:47 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:38:47 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:38:47 localhost podman[87435]: 2025-11-28 08:38:47.996915529 +0000 UTC m=+0.093951392 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, distribution-scope=public) Nov 28 03:38:48 localhost podman[87435]: 2025-11-28 08:38:48.033843845 +0000 UTC m=+0.130879708 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:38:48 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:38:48 localhost podman[87434]: 2025-11-28 08:38:48.052097916 +0000 UTC m=+0.149861501 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr) Nov 28 03:38:48 localhost podman[87434]: 2025-11-28 08:38:48.25923894 +0000 UTC m=+0.357002485 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:38:48 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:38:48 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=33792 DF PROTO=TCP SPT=28136 DPT=19885 SEQ=1257399283 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FA697000000000103030A) Nov 28 03:38:48 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=45671 DF PROTO=TCP SPT=28148 DPT=19885 SEQ=2937239539 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FA6FA000000000103030A) Nov 28 03:38:49 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=45672 DF PROTO=TCP SPT=28148 DPT=19885 SEQ=2937239539 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FAB17000000000103030A) Nov 28 03:38:49 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=11619 DF PROTO=TCP SPT=28156 DPT=19885 SEQ=2713186779 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FAC80000000000103030A) Nov 28 03:38:50 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=162.142.125.193 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=11620 DF PROTO=TCP SPT=28156 DPT=19885 SEQ=2713186779 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC96FB097000000000103030A) Nov 28 03:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:38:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:38:51 localhost systemd[1]: tmp-crun.OKXNL1.mount: Deactivated successfully. Nov 28 03:38:52 localhost podman[87486]: 2025-11-28 08:38:51.995402124 +0000 UTC m=+0.096971065 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4) Nov 28 03:38:52 localhost podman[87486]: 2025-11-28 08:38:52.030460043 +0000 UTC m=+0.132028994 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi) Nov 28 03:38:52 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:38:52 localhost podman[87487]: 2025-11-28 08:38:52.049017494 +0000 UTC m=+0.144131636 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid) Nov 28 03:38:52 localhost podman[87488]: 2025-11-28 08:38:52.111545248 +0000 UTC m=+0.201736548 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:38:52 localhost podman[87485]: 2025-11-28 08:38:52.084047152 +0000 UTC m=+0.186281233 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, summary=Red Hat 
OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond, tcib_managed=true) Nov 28 03:38:52 localhost podman[87488]: 2025-11-28 08:38:52.141631423 +0000 UTC 
m=+0.231822743 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) Nov 28 03:38:52 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:38:52 localhost podman[87485]: 2025-11-28 08:38:52.168431638 +0000 UTC m=+0.270665739 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 28 03:38:52 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:38:52 localhost podman[87487]: 2025-11-28 08:38:52.187600578 +0000 UTC m=+0.282714740 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat 
OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git) Nov 28 03:38:52 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:38:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:38:52 localhost podman[87576]: 2025-11-28 08:38:52.966684009 +0000 UTC m=+0.078381703 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 28 03:38:52 localhost systemd[1]: tmp-crun.UDGIAy.mount: Deactivated successfully. 
Nov 28 03:38:53 localhost podman[87576]: 2025-11-28 08:38:53.328682617 +0000 UTC m=+0.440380281 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:38:53 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:38:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:38:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:38:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:38:55 localhost podman[87599]: 2025-11-28 08:38:55.983204962 +0000 UTC m=+0.088621208 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:38:56 localhost podman[87600]: 2025-11-28 08:38:56.030446215 +0000 UTC m=+0.131980192 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z) Nov 28 03:38:56 localhost podman[87601]: 2025-11-28 08:38:56.083156687 +0000 UTC m=+0.182936439 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:38:56 localhost podman[87600]: 2025-11-28 08:38:56.113460059 +0000 UTC m=+0.214994036 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, tcib_managed=true) Nov 28 03:38:56 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: 
Deactivated successfully. Nov 28 03:38:56 localhost podman[87599]: 2025-11-28 08:38:56.136692054 +0000 UTC m=+0.242108350 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, 
distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4) Nov 28 03:38:56 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:38:56 localhost podman[87601]: 2025-11-28 08:38:56.158892147 +0000 UTC m=+0.258671859 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, tcib_managed=true) Nov 28 03:38:56 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:39:03 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=47924 DF PROTO=TCP SPT=53598 DPT=19885 SEQ=1748276739 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA2304D31000000000103030A) Nov 28 03:39:04 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=47925 DF PROTO=TCP SPT=53598 DPT=19885 SEQ=1748276739 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA230513B000000000103030A) Nov 28 03:39:05 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=21884 DF PROTO=TCP SPT=53610 DPT=19885 SEQ=3538641033 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA2305766000000000103030A) Nov 28 03:39:07 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=21885 DF PROTO=TCP SPT=53610 DPT=19885 SEQ=3538641033 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA2305B7B000000000103030A) Nov 28 03:39:07 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=24914 DF PROTO=TCP SPT=53624 DPT=19885 SEQ=3640415225 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA2305BED000000000103030A) Nov 28 03:39:08 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=fa:61:25:a2:5a:71 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=167.94.138.32 DST=38.102.83.53 LEN=60 TOS=0x08 PREC=0x40 TTL=52 ID=24915 DF PROTO=TCP SPT=53624 DPT=19885 SEQ=3640415225 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AA2305FFA000000000103030A) Nov 28 
03:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:39:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:39:18 localhost podman[87721]: 2025-11-28 08:39:18.975891979 +0000 UTC m=+0.085029046 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, 
maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:39:18 localhost podman[87721]: 2025-11-28 08:39:18.983809063 +0000 UTC m=+0.092946110 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 
'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Nov 28 03:39:18 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated 
successfully. Nov 28 03:39:19 localhost systemd[1]: tmp-crun.6jvD6m.mount: Deactivated successfully. Nov 28 03:39:19 localhost podman[87720]: 2025-11-28 08:39:19.040474506 +0000 UTC m=+0.149816510 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com) Nov 28 03:39:19 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=66.132.153.138 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=48645 DF PROTO=TCP SPT=41584 DPT=19885 SEQ=2558183826 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC990113C000000000103030A) Nov 28 03:39:19 localhost podman[87720]: 2025-11-28 08:39:19.256562615 +0000 UTC m=+0.365904639 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:39:19 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:39:20 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=66.132.153.138 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=12011 DF PROTO=TCP SPT=41618 DPT=19885 SEQ=3498461950 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC990152E000000000103030A) Nov 28 03:39:21 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=66.132.153.138 DST=38.102.83.53 LEN=60 TOS=0x00 PREC=0x00 TTL=52 ID=48919 DF PROTO=TCP SPT=41640 DPT=19885 SEQ=2596275895 ACK=0 WINDOW=21900 RES=0x00 SYN URGP=0 OPT (020405B40402080AC9901922000000000103030A) Nov 28 03:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:39:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:39:22 localhost podman[87772]: 2025-11-28 08:39:22.989758518 +0000 UTC m=+0.090473544 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true) Nov 28 03:39:23 localhost podman[87772]: 2025-11-28 08:39:23.018392379 +0000 UTC m=+0.119107405 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64) Nov 28 03:39:23 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:39:23 localhost podman[87771]: 2025-11-28 08:39:23.040340325 +0000 UTC m=+0.142056193 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, tcib_managed=true, build-date=2025-11-18T23:44:13Z, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:39:23 localhost podman[87771]: 2025-11-28 08:39:23.050308612 +0000 UTC m=+0.152024500 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:39:23 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:39:23 localhost systemd[1]: tmp-crun.irJBJG.mount: Deactivated successfully. 
Nov 28 03:39:23 localhost podman[87769]: 2025-11-28 08:39:23.093226692 +0000 UTC m=+0.200982825 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:39:23 localhost podman[87769]: 2025-11-28 08:39:23.104265732 +0000 UTC m=+0.212021855 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.) Nov 28 03:39:23 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:39:23 localhost podman[87770]: 2025-11-28 08:39:23.176325159 +0000 UTC m=+0.282036039 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi) Nov 28 03:39:23 localhost podman[87770]: 2025-11-28 08:39:23.226408809 +0000 UTC m=+0.332119699 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Nov 28 03:39:23 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:39:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:39:23 localhost podman[87862]: 2025-11-28 08:39:23.967495541 +0000 UTC m=+0.076938508 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Nov 28 03:39:24 localhost podman[87862]: 2025-11-28 08:39:24.37533965 +0000 UTC m=+0.484782597 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64, 
managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:39:24 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:39:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:39:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:39:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:39:26 localhost podman[87884]: 2025-11-28 08:39:26.97577479 +0000 UTC m=+0.083906042 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, 
name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:39:27 localhost podman[87885]: 2025-11-28 08:39:27.022761766 +0000 UTC m=+0.126412060 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, url=https://www.redhat.com, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, distribution-scope=public) Nov 28 03:39:27 localhost podman[87884]: 2025-11-28 08:39:27.077856241 +0000 UTC m=+0.185987543 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true) Nov 28 03:39:27 localhost podman[87886]: 2025-11-28 08:39:27.090909683 +0000 UTC m=+0.192600538 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, 
health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:39:27 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:39:27 localhost podman[87885]: 2025-11-28 08:39:27.104083978 +0000 UTC m=+0.207734272 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:39:27 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:39:27 localhost podman[87886]: 2025-11-28 08:39:27.162494816 +0000 UTC m=+0.264185701 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:39:27 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:39:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:39:49 localhost systemd[1]: tmp-crun.jdMySW.mount: Deactivated successfully. 
Nov 28 03:39:50 localhost podman[88090]: 2025-11-28 08:39:49.99881449 +0000 UTC m=+0.105894519 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Nov 28 03:39:50 localhost systemd[1]: tmp-crun.8J9AvB.mount: Deactivated successfully. Nov 28 03:39:50 localhost podman[88089]: 2025-11-28 08:39:50.027436071 +0000 UTC m=+0.136687446 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:39:50 localhost podman[88090]: 2025-11-28 08:39:50.083557239 +0000 UTC m=+0.190637278 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, container_name=collectd, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:39:50 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:39:50 localhost podman[88089]: 2025-11-28 08:39:50.21979942 +0000 UTC m=+0.329050765 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:39:50 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:39:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:39:53 localhost podman[88140]: 2025-11-28 08:39:53.98602901 +0000 UTC m=+0.090637621 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi) Nov 28 03:39:54 localhost podman[88140]: 2025-11-28 08:39:54.017264571 +0000 UTC m=+0.121873172 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:39:54 localhost podman[88141]: 2025-11-28 08:39:54.032159719 +0000 UTC m=+0.135364666 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., 
distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12) Nov 28 03:39:54 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: 
Deactivated successfully. Nov 28 03:39:54 localhost podman[88141]: 2025-11-28 08:39:54.046448728 +0000 UTC m=+0.149653665 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-iscsid-container, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:39:54 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:39:54 localhost podman[88142]: 2025-11-28 08:39:54.084228221 +0000 UTC m=+0.185379755 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, release=1761123044, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.) 
Nov 28 03:39:54 localhost podman[88139]: 2025-11-28 08:39:54.149009284 +0000 UTC m=+0.252875812 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:39:54 localhost podman[88142]: 2025-11-28 08:39:54.149587832 +0000 UTC m=+0.250739396 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container) Nov 28 03:39:54 localhost podman[88139]: 2025-11-28 08:39:54.188666904 +0000 UTC m=+0.292533452 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:39:54 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:39:54 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:39:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:39:54 localhost podman[88231]: 2025-11-28 08:39:54.96467997 +0000 UTC m=+0.076726681 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public) Nov 28 03:39:55 localhost podman[88231]: 2025-11-28 08:39:55.33973965 +0000 UTC m=+0.451786311 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, container_name=nova_migration_target, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:39:55 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:39:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 03:39:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:39:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:39:57 localhost systemd[1]: tmp-crun.LJkWJx.mount: Deactivated successfully. Nov 28 03:39:57 localhost podman[88254]: 2025-11-28 08:39:57.972456734 +0000 UTC m=+0.079230138 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public) Nov 28 03:39:57 localhost podman[88254]: 2025-11-28 08:39:57.998424123 +0000 UTC m=+0.105197517 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, vcs-type=git, url=https://www.redhat.com, 
name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team) Nov 28 03:39:58 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:39:58 localhost podman[88255]: 2025-11-28 08:39:58.019683487 +0000 UTC m=+0.121764796 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, 
io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 03:39:58 localhost podman[88255]: 2025-11-28 08:39:58.061493273 +0000 UTC m=+0.163574572 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, container_name=ovn_metadata_agent, release=1761123044, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Nov 28 03:39:58 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:39:58 localhost podman[88253]: 2025-11-28 08:39:58.084203812 +0000 UTC m=+0.189590183 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible) Nov 28 03:39:58 localhost podman[88253]: 2025-11-28 08:39:58.110399868 +0000 UTC m=+0.215786249 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:39:58 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:39:58 localhost systemd[1]: tmp-crun.ak6eHd.mount: Deactivated successfully. Nov 28 03:40:13 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:40:13 localhost recover_tripleo_nova_virtqemud[88329]: 62642 Nov 28 03:40:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:40:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:40:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:40:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:40:21 localhost podman[88376]: 2025-11-28 08:40:21.009213646 +0000 UTC m=+0.112825902 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible) Nov 28 03:40:21 localhost podman[88375]: 2025-11-28 08:40:20.992612885 +0000 UTC m=+0.101324758 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:40:21 localhost podman[88376]: 2025-11-28 08:40:21.048704691 +0000 UTC m=+0.152316977 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd) Nov 28 03:40:21 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:40:21 localhost podman[88375]: 2025-11-28 08:40:21.179421763 +0000 UTC m=+0.288133636 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1) Nov 28 03:40:21 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:40:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:40:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:40:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:40:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:40:24 localhost podman[88424]: 2025-11-28 08:40:24.983847567 +0000 UTC m=+0.086053869 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true) Nov 28 03:40:25 localhost podman[88423]: 2025-11-28 08:40:25.047161644 +0000 UTC m=+0.153493553 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4) Nov 28 03:40:25 localhost podman[88424]: 2025-11-28 08:40:25.062224868 +0000 UTC m=+0.164431170 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:40:25 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:40:25 localhost podman[88423]: 2025-11-28 08:40:25.085508174 +0000 UTC m=+0.191840093 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:40:25 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:40:25 localhost podman[88429]: 2025-11-28 08:40:25.153042642 +0000 UTC m=+0.248152035 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public) Nov 28 03:40:25 localhost podman[88425]: 2025-11-28 08:40:24.966021328 +0000 UTC m=+0.069185289 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 28 
03:40:25 localhost podman[88429]: 2025-11-28 08:40:25.184434998 +0000 UTC m=+0.279544401 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:40:25 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:40:25 localhost podman[88425]: 2025-11-28 08:40:25.19650173 +0000 UTC m=+0.299665731 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc.) Nov 28 03:40:25 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:40:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:40:25 localhost systemd[1]: tmp-crun.nZHtU4.mount: Deactivated successfully. 
Nov 28 03:40:25 localhost podman[88515]: 2025-11-28 08:40:25.966876043 +0000 UTC m=+0.074453322 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:40:26 localhost podman[88515]: 2025-11-28 08:40:26.373509504 +0000 UTC m=+0.481086813 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12) Nov 28 03:40:26 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:40:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:40:28 localhost podman[88538]: 2025-11-28 08:40:28.986279533 +0000 UTC m=+0.086704428 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:40:29 localhost podman[88539]: 2025-11-28 08:40:29.035513239 +0000 UTC m=+0.131645871 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.buildah.version=1.41.4) Nov 28 03:40:29 localhost podman[88538]: 2025-11-28 08:40:29.061145877 +0000 UTC m=+0.161570792 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, release=1761123044, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:40:29 localhost podman[88539]: 2025-11-28 08:40:29.070436193 +0000 UTC m=+0.166568875 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step5, tcib_managed=true, build-date=2025-11-19T00:36:58Z, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:40:29 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:40:29 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:40:29 localhost podman[88540]: 2025-11-28 08:40:29.155482 +0000 UTC m=+0.248501987 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true) Nov 28 03:40:29 localhost podman[88540]: 2025-11-28 08:40:29.21563581 +0000 UTC m=+0.308655787 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, release=1761123044, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true) Nov 28 03:40:29 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:40:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:40:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:40:52 localhost podman[88687]: 2025-11-28 08:40:52.007807709 +0000 UTC m=+0.106207079 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:40:52 localhost systemd[1]: tmp-crun.wpGHpu.mount: Deactivated successfully. 
Nov 28 03:40:52 localhost podman[88688]: 2025-11-28 08:40:52.112511831 +0000 UTC m=+0.208112875 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:40:52 localhost podman[88688]: 2025-11-28 08:40:52.150660404 +0000 UTC m=+0.246261438 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, 
name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12) Nov 28 03:40:52 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:40:52 localhost podman[88687]: 2025-11-28 08:40:52.23567103 +0000 UTC m=+0.334070350 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, 
io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step1, container_name=metrics_qdr) Nov 28 03:40:52 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:40:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:40:55 localhost podman[88738]: 2025-11-28 08:40:55.978251182 +0000 UTC m=+0.081460788 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, release=1761123044, version=17.1.12, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible) Nov 28 03:40:55 localhost podman[88738]: 2025-11-28 08:40:55.991447587 +0000 UTC m=+0.094657243 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, container_name=iscsid) Nov 28 03:40:56 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:40:56 localhost systemd[1]: tmp-crun.7Gg2Mb.mount: Deactivated successfully. 
Nov 28 03:40:56 localhost podman[88741]: 2025-11-28 08:40:56.052330851 +0000 UTC m=+0.152732180 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:40:56 localhost podman[88741]: 2025-11-28 08:40:56.078566747 +0000 UTC m=+0.178968066 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, release=1761123044) Nov 28 03:40:56 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:40:56 localhost podman[88737]: 2025-11-28 08:40:56.135211801 +0000 UTC m=+0.242897555 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi) Nov 28 03:40:56 localhost podman[88736]: 2025-11-28 08:40:56.179835944 +0000 UTC m=+0.290061896 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true) Nov 28 03:40:56 localhost podman[88737]: 2025-11-28 08:40:56.187340194 +0000 UTC m=+0.295025948 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true) Nov 28 03:40:56 localhost podman[88736]: 2025-11-28 08:40:56.21253766 +0000 UTC m=+0.322763592 container exec_died 
719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, distribution-scope=public, container_name=logrotate_crond, vendor=Red Hat, Inc., release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
architecture=x86_64, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Nov 28 03:40:56 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:40:56 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:40:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:40:56 localhost podman[88828]: 2025-11-28 08:40:56.959943526 +0000 UTC m=+0.072817311 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_migration_target) Nov 28 03:40:57 localhost podman[88828]: 2025-11-28 08:40:57.29419939 +0000 UTC m=+0.407073205 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:40:57 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:40:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:40:59 localhost podman[88851]: 2025-11-28 08:40:59.957491376 +0000 UTC m=+0.067377209 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_id=tripleo_step5, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, batch=17.1_20251118.1) Nov 28 03:40:59 localhost podman[88851]: 2025-11-28 
08:40:59.985170062 +0000 UTC m=+0.095055885 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, container_name=nova_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:40:59 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:41:00 localhost podman[88852]: 2025-11-28 08:41:00.067579809 +0000 UTC m=+0.171985855 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:41:00 localhost podman[88850]: 2025-11-28 08:41:00.121865879 +0000 UTC m=+0.232389091 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, container_name=ovn_controller, release=1761123044) Nov 28 03:41:00 localhost podman[88852]: 2025-11-28 08:41:00.138337052 +0000 UTC m=+0.242743038 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, 
maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., version=17.1.12) Nov 28 03:41:00 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:41:00 localhost podman[88850]: 2025-11-28 08:41:00.19523101 +0000 UTC m=+0.305754302 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, 
build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public) Nov 28 03:41:00 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:41:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:41:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:41:22 localhost podman[88947]: 2025-11-28 08:41:22.980659958 +0000 UTC m=+0.081401939 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Nov 28 03:41:23 localhost podman[88947]: 2025-11-28 08:41:23.017600777 +0000 UTC m=+0.118342798 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z) Nov 28 03:41:23 localhost podman[88946]: 2025-11-28 08:41:23.037265747 +0000 UTC m=+0.138685448 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, 
vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1) Nov 28 03:41:23 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:41:23 localhost podman[88946]: 2025-11-28 08:41:23.211574652 +0000 UTC m=+0.312994383 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:41:23 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:41:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:41:26 localhost systemd[1]: tmp-crun.OmmYvR.mount: Deactivated successfully. 
Nov 28 03:41:26 localhost podman[88996]: 2025-11-28 08:41:26.981837158 +0000 UTC m=+0.088654080 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:41:27 localhost podman[88999]: 2025-11-28 08:41:27.035950871 +0000 UTC m=+0.135750808 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:41:27 localhost podman[88996]: 2025-11-28 08:41:27.048567157 +0000 UTC m=+0.155384069 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=logrotate_crond, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 03:41:27 localhost podman[88998]: 2025-11-28 08:41:27.000347783 +0000 UTC m=+0.102517923 container health_status 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public) Nov 28 03:41:27 localhost podman[88998]: 2025-11-28 08:41:27.083550596 +0000 UTC m=+0.185720736 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, architecture=x86_64, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:41:27 localhost podman[88999]: 2025-11-28 08:41:27.091296672 +0000 UTC m=+0.191096609 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team) Nov 28 03:41:27 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:41:27 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:41:27 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:41:27 localhost podman[88997]: 2025-11-28 08:41:27.194157095 +0000 UTC m=+0.298636745 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, vcs-type=git) Nov 28 03:41:27 localhost podman[88997]: 2025-11-28 08:41:27.22641378 +0000 UTC m=+0.330893450 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1) Nov 28 03:41:27 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:41:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:41:27 localhost podman[89081]: 2025-11-28 08:41:27.969104161 +0000 UTC m=+0.081134380 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc.) Nov 28 03:41:27 localhost systemd[1]: tmp-crun.2bmihx.mount: Deactivated successfully. Nov 28 03:41:28 localhost podman[89081]: 2025-11-28 08:41:28.354410313 +0000 UTC m=+0.466440512 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:41:28 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:41:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:41:30 localhost podman[89105]: 2025-11-28 08:41:30.980566423 +0000 UTC m=+0.088154044 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=) Nov 28 03:41:31 localhost systemd[1]: tmp-crun.UQxPZZ.mount: Deactivated successfully. Nov 28 03:41:31 localhost podman[89106]: 2025-11-28 08:41:31.029359924 +0000 UTC m=+0.133011435 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=nova_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:41:31 localhost podman[89107]: 2025-11-28 08:41:31.089505711 +0000 UTC m=+0.190281764 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 
(image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:41:31 localhost podman[89105]: 2025-11-28 08:41:31.107631346 +0000 UTC m=+0.215218967 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:41:31 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:41:31 localhost podman[89107]: 2025-11-28 08:41:31.12644557 +0000 UTC m=+0.227221633 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:41:31 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:41:31 localhost podman[89106]: 2025-11-28 08:41:31.162332176 +0000 UTC m=+0.265983627 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, release=1761123044, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:41:31 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:41:31 localhost systemd[1]: tmp-crun.MK3dYl.mount: Deactivated successfully. Nov 28 03:41:50 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:41:50 localhost recover_tripleo_nova_virtqemud[89192]: 62642 Nov 28 03:41:50 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:41:50 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:41:51 localhost podman[89280]: 2025-11-28 08:41:51.111699721 +0000 UTC m=+0.082643285 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc.) 
Nov 28 03:41:51 localhost podman[89280]: 2025-11-28 08:41:51.21150058 +0000 UTC m=+0.182444114 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 03:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:41:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:41:53 localhost podman[89423]: 2025-11-28 08:41:53.989501021 +0000 UTC m=+0.091494456 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, container_name=collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Nov 28 03:41:54 localhost podman[89422]: 2025-11-28 08:41:54.035120895 +0000 UTC m=+0.137081509 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, 
name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64) Nov 28 03:41:54 localhost podman[89423]: 2025-11-28 08:41:54.054393194 +0000 UTC m=+0.156386589 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, version=17.1.12, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) Nov 28 03:41:54 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:41:54 localhost podman[89422]: 2025-11-28 08:41:54.231439473 +0000 UTC m=+0.333400097 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:41:54 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:41:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:41:57 localhost podman[89472]: 2025-11-28 08:41:57.99294372 +0000 UTC m=+0.090391003 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, build-date=2025-11-19T00:12:45Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 28 03:41:58 localhost podman[89471]: 2025-11-28 08:41:58.042463092 +0000 UTC m=+0.141341129 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, release=1761123044, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.) 
Nov 28 03:41:58 localhost podman[89471]: 2025-11-28 08:41:58.079616278 +0000 UTC m=+0.178494315 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:41:58 localhost podman[89473]: 2025-11-28 08:41:58.097137083 +0000 UTC m=+0.190716008 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 28 03:41:58 localhost podman[89473]: 2025-11-28 08:41:58.104418595 +0000 UTC m=+0.197997520 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, version=17.1.12) Nov 28 03:41:58 localhost podman[89478]: 2025-11-28 08:41:58.106353984 +0000 UTC m=+0.194803632 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, 
health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc.) Nov 28 03:41:58 localhost podman[89472]: 2025-11-28 08:41:58.128276234 +0000 UTC m=+0.225723547 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:41:58 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:41:58 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:41:58 localhost podman[89478]: 2025-11-28 08:41:58.182625354 +0000 UTC m=+0.271074962 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12) Nov 28 03:41:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:41:58 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:41:58 localhost podman[89561]: 2025-11-28 08:41:58.979322795 +0000 UTC m=+0.080280834 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:41:59 localhost podman[89561]: 2025-11-28 08:41:59.374735495 +0000 UTC m=+0.475693534 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, release=1761123044) Nov 28 03:41:59 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:42:01 localhost podman[89582]: 2025-11-28 08:42:01.976363649 +0000 UTC m=+0.086034970 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, 
version=17.1.12, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:42:02 localhost podman[89582]: 2025-11-28 08:42:02.026612044 +0000 UTC m=+0.136283375 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true) Nov 28 03:42:02 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:42:02 localhost podman[89584]: 2025-11-28 08:42:02.028296755 +0000 UTC m=+0.132801488 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:42:02 localhost podman[89584]: 2025-11-28 08:42:02.11357056 +0000 UTC m=+0.218075253 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, 
build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=) Nov 28 03:42:02 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:42:02 localhost podman[89583]: 2025-11-28 08:42:02.080850741 +0000 UTC m=+0.188320105 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public) Nov 28 03:42:02 localhost podman[89583]: 2025-11-28 08:42:02.159974968 +0000 UTC m=+0.267444252 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, container_name=nova_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Nov 28 03:42:02 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:42:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:42:24 localhost podman[89679]: 2025-11-28 08:42:24.979054233 +0000 UTC m=+0.085836063 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, tcib_managed=true) Nov 28 03:42:25 localhost systemd[1]: tmp-crun.qURE4h.mount: Deactivated successfully. Nov 28 03:42:25 localhost podman[89680]: 2025-11-28 08:42:25.042155851 +0000 UTC m=+0.147349342 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:42:25 localhost podman[89680]: 2025-11-28 08:42:25.078607754 +0000 UTC m=+0.183801255 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:42:25 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:42:25 localhost podman[89679]: 2025-11-28 08:42:25.16748708 +0000 UTC m=+0.274268920 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:42:25 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:42:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:42:28 localhost podman[89729]: 2025-11-28 08:42:28.988714293 +0000 UTC m=+0.086306768 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:42:29 localhost podman[89728]: 2025-11-28 08:42:28.968164305 +0000 UTC m=+0.072000951 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible) Nov 28 03:42:29 localhost podman[89730]: 2025-11-28 08:42:29.035218474 +0000 UTC m=+0.131498538 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:42:29 localhost podman[89727]: 2025-11-28 08:42:29.081700844 +0000 UTC m=+0.186529890 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vendor=Red Hat, 
Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4) Nov 28 03:42:29 localhost podman[89730]: 2025-11-28 08:42:29.08581177 +0000 UTC m=+0.182091804 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12) Nov 28 03:42:29 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:42:29 localhost podman[89728]: 2025-11-28 08:42:29.101452387 +0000 UTC m=+0.205289023 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, 
distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:42:29 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:42:29 localhost podman[89727]: 2025-11-28 08:42:29.14310903 +0000 UTC m=+0.247938126 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:42:29 localhost podman[89729]: 2025-11-28 08:42:29.156327064 +0000 UTC m=+0.253919589 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, container_name=iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:42:29 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:42:29 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:42:29 localhost podman[89818]: 2025-11-28 08:42:29.961279085 +0000 UTC m=+0.073926349 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:42:30 localhost podman[89818]: 2025-11-28 08:42:30.394561403 +0000 UTC m=+0.507208627 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4) Nov 28 03:42:30 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:42:32 localhost systemd[1]: tmp-crun.BFajQU.mount: Deactivated successfully. Nov 28 03:42:33 localhost podman[89843]: 2025-11-28 08:42:33.020310122 +0000 UTC m=+0.116718066 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, container_name=nova_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Nov 28 03:42:33 localhost podman[89844]: 2025-11-28 08:42:32.975693699 +0000 UTC m=+0.073156786 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:42:33 localhost podman[89843]: 2025-11-28 08:42:33.050569647 +0000 UTC m=+0.146977621 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step5, vendor=Red Hat, Inc.) Nov 28 03:42:33 localhost podman[89844]: 2025-11-28 08:42:33.060708467 +0000 UTC m=+0.158171544 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:42:33 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:42:33 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. 
Nov 28 03:42:33 localhost podman[89842]: 2025-11-28 08:42:33.099628076 +0000 UTC m=+0.199275680 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller) Nov 28 03:42:33 localhost podman[89842]: 2025-11-28 08:42:33.144651892 +0000 UTC m=+0.244299536 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, 
release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc.) Nov 28 03:42:33 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:42:33 localhost systemd[1]: tmp-crun.XOvAMK.mount: Deactivated successfully. Nov 28 03:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:42:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:42:55 localhost podman[89995]: 2025-11-28 08:42:55.975359233 +0000 UTC m=+0.083183822 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:42:56 localhost systemd[1]: tmp-crun.gXgHqo.mount: Deactivated successfully. 
Nov 28 03:42:56 localhost podman[89996]: 2025-11-28 08:42:56.038313627 +0000 UTC m=+0.145466106 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1) Nov 28 03:42:56 localhost podman[89996]: 2025-11-28 08:42:56.074360008 +0000 UTC m=+0.181512457 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:42:56 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:42:56 localhost podman[89995]: 2025-11-28 08:42:56.173028302 +0000 UTC m=+0.280852861 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, container_name=metrics_qdr, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd) Nov 28 03:42:56 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:42:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:42:59 localhost systemd[1]: tmp-crun.SFzAhi.mount: Deactivated successfully. 
Nov 28 03:42:59 localhost podman[90044]: 2025-11-28 08:42:59.988238371 +0000 UTC m=+0.092255070 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container) Nov 28 03:43:00 localhost podman[90044]: 2025-11-28 08:42:59.99835947 +0000 UTC m=+0.102376189 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 
'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:43:00 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:43:00 localhost podman[90045]: 2025-11-28 08:43:00.078000793 +0000 UTC m=+0.178494094 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi) Nov 28 03:43:00 localhost podman[90045]: 2025-11-28 08:43:00.101353436 +0000 UTC m=+0.201846677 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:43:00 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:43:00 localhost podman[90046]: 2025-11-28 08:43:00.18198848 +0000 UTC m=+0.276688374 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4) Nov 28 03:43:00 localhost podman[90046]: 2025-11-28 08:43:00.197010729 +0000 UTC m=+0.291710583 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:43:00 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:43:00 localhost podman[90047]: 2025-11-28 08:43:00.24317335 +0000 UTC m=+0.338206964 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:43:00 localhost podman[90047]: 2025-11-28 08:43:00.268342358 +0000 UTC m=+0.363375942 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:43:00 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:43:01 localhost podman[90132]: 2025-11-28 08:43:01.061882122 +0000 UTC m=+0.074506017 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044) Nov 28 03:43:01 localhost podman[90132]: 2025-11-28 08:43:01.442477029 +0000 UTC m=+0.455100984 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:43:01 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:43:03 localhost podman[90155]: 2025-11-28 08:43:03.976752784 +0000 UTC m=+0.084751461 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.) Nov 28 03:43:04 localhost podman[90155]: 2025-11-28 08:43:04.005605395 +0000 UTC m=+0.113604062 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, release=1761123044, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git) Nov 28 03:43:04 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:43:04 localhost podman[90157]: 2025-11-28 08:43:04.025124421 +0000 UTC m=+0.126278458 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.) 
Nov 28 03:43:04 localhost podman[90157]: 2025-11-28 08:43:04.080644078 +0000 UTC m=+0.181798075 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:43:04 localhost podman[90156]: 2025-11-28 08:43:04.092966314 +0000 UTC m=+0.197549996 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 
'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:43:04 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:43:04 localhost podman[90156]: 2025-11-28 08:43:04.148582713 +0000 UTC m=+0.253166435 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 
5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true) Nov 28 03:43:04 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:43:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Nov 28 03:43:23 localhost recover_tripleo_nova_virtqemud[90227]: 62642 Nov 28 03:43:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:43:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:43:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:43:26 localhost podman[90229]: 2025-11-28 08:43:26.969483574 +0000 UTC m=+0.077445336 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 
'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:43:27 localhost podman[90229]: 2025-11-28 08:43:27.00764736 +0000 UTC m=+0.115609062 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:43:27 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:43:27 localhost podman[90228]: 2025-11-28 08:43:27.026678892 +0000 UTC m=+0.137063919 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:43:27 localhost podman[90228]: 2025-11-28 08:43:27.208447385 +0000 UTC m=+0.318832372 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:43:27 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:43:30 localhost systemd[1]: tmp-crun.xpAT3Y.mount: Deactivated successfully. Nov 28 03:43:30 localhost podman[90276]: 2025-11-28 08:43:30.991247344 +0000 UTC m=+0.097761128 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1) Nov 28 03:43:31 localhost podman[90276]: 2025-11-28 08:43:31.002368913 +0000 UTC m=+0.108882747 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 03:43:31 localhost podman[90284]: 2025-11-28 08:43:31.041179439 +0000 UTC m=+0.135645895 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container) Nov 28 03:43:31 localhost podman[90284]: 2025-11-28 08:43:31.072670391 +0000 UTC m=+0.167136867 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, architecture=x86_64) Nov 28 03:43:31 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:43:31 localhost podman[90278]: 2025-11-28 08:43:31.088366691 +0000 UTC m=+0.187574242 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git) Nov 28 03:43:31 localhost podman[90278]: 2025-11-28 08:43:31.099416328 +0000 UTC m=+0.198623919 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc.) Nov 28 03:43:31 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:43:31 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:43:31 localhost podman[90277]: 2025-11-28 08:43:31.186056625 +0000 UTC m=+0.288203466 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, distribution-scope=public) Nov 28 03:43:31 localhost podman[90277]: 2025-11-28 08:43:31.212472602 +0000 UTC m=+0.314619423 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12) Nov 28 03:43:31 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:43:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:43:31 localhost podman[90367]: 2025-11-28 08:43:31.96164523 +0000 UTC m=+0.073880368 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com) Nov 28 03:43:32 localhost podman[90367]: 2025-11-28 08:43:32.330326874 +0000 UTC m=+0.442562012 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_migration_target, tcib_managed=true, release=1761123044, build-date=2025-11-19T00:36:58Z, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:43:32 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:43:34 localhost systemd[1]: tmp-crun.h4q8TC.mount: Deactivated successfully. 
Nov 28 03:43:35 localhost systemd[1]: tmp-crun.iFxwXQ.mount: Deactivated successfully. Nov 28 03:43:35 localhost podman[90389]: 2025-11-28 08:43:34.984596503 +0000 UTC m=+0.092292781 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:43:35 localhost podman[90391]: 2025-11-28 08:43:35.039301274 +0000 UTC m=+0.136775449 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team) Nov 28 03:43:35 localhost podman[90390]: 2025-11-28 08:43:35.010266507 +0000 UTC m=+0.107591878 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, 
com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:43:35 localhost podman[90389]: 2025-11-28 08:43:35.067463935 +0000 UTC m=+0.175160203 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true) Nov 28 03:43:35 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:43:35 localhost podman[90390]: 2025-11-28 08:43:35.095510202 +0000 UTC m=+0.192835523 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, 
tcib_managed=true, container_name=nova_compute, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:43:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:43:35 localhost podman[90391]: 2025-11-28 08:43:35.10824811 +0000 UTC m=+0.205722265 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 28 03:43:35 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:43:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:43:57 localhost podman[90538]: 2025-11-28 08:43:57.971165087 +0000 UTC m=+0.080859661 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044) Nov 28 03:43:58 localhost podman[90539]: 2025-11-28 08:43:58.027086776 +0000 UTC m=+0.131450777 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3) Nov 28 03:43:58 localhost podman[90539]: 2025-11-28 08:43:58.035254555 +0000 UTC m=+0.139618556 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true) Nov 28 03:43:58 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:43:58 localhost podman[90538]: 2025-11-28 08:43:58.163706009 +0000 UTC m=+0.273400583 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, tcib_managed=true, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4) Nov 28 03:43:58 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:44:01 localhost podman[90591]: 2025-11-28 08:44:01.987643564 +0000 UTC m=+0.087680519 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4) Nov 28 03:44:02 localhost podman[90588]: 2025-11-28 08:44:02.043054517 +0000 UTC m=+0.148840808 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4) Nov 28 03:44:02 localhost podman[90591]: 2025-11-28 08:44:02.046496652 +0000 UTC m=+0.146533617 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:44:02 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:44:02 localhost podman[90588]: 2025-11-28 08:44:02.076498499 +0000 UTC m=+0.182284810 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true, architecture=x86_64, build-date=2025-11-18T22:49:32Z, version=17.1.12) Nov 28 03:44:02 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:44:02 localhost podman[90589]: 2025-11-28 08:44:02.092579971 +0000 UTC m=+0.197837026 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Nov 28 03:44:02 localhost podman[90589]: 2025-11-28 08:44:02.12300305 +0000 UTC m=+0.228260075 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-type=git, 
tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4) Nov 28 03:44:02 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:44:02 localhost podman[90590]: 2025-11-28 08:44:02.141131864 +0000 UTC m=+0.243581343 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:44:02 localhost podman[90590]: 2025-11-28 08:44:02.47030186 +0000 UTC m=+0.572751299 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, 
managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:44:02 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:44:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:44:02 localhost podman[90682]: 2025-11-28 08:44:02.583580381 +0000 UTC m=+0.077032864 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.expose-services=, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:44:02 localhost podman[90682]: 2025-11-28 08:44:02.970955435 +0000 UTC m=+0.464407938 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:44:02 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:44:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:44:05 localhost systemd[1]: tmp-crun.IqKjSn.mount: Deactivated successfully. 
Nov 28 03:44:06 localhost podman[90705]: 2025-11-28 08:44:06.000558964 +0000 UTC m=+0.108712562 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4) Nov 28 03:44:06 localhost podman[90707]: 2025-11-28 08:44:05.982135841 +0000 UTC m=+0.086404140 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, version=17.1.12) Nov 28 03:44:06 localhost podman[90706]: 2025-11-28 08:44:06.043240398 +0000 UTC m=+0.148685204 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 
'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:44:06 localhost podman[90705]: 2025-11-28 08:44:06.054424469 +0000 UTC m=+0.162578057 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, container_name=ovn_controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044) Nov 28 03:44:06 localhost podman[90707]: 2025-11-28 08:44:06.064535319 +0000 UTC m=+0.168803618 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, 
config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4) Nov 28 03:44:06 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: 
Deactivated successfully. Nov 28 03:44:06 localhost podman[90706]: 2025-11-28 08:44:06.077426162 +0000 UTC m=+0.182870928 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 28 03:44:06 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:44:06 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:44:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:44:28 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... 
Nov 28 03:44:28 localhost recover_tripleo_nova_virtqemud[90786]: 62642 Nov 28 03:44:28 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:44:28 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:44:28 localhost systemd[1]: tmp-crun.YoDC71.mount: Deactivated successfully. Nov 28 03:44:28 localhost podman[90778]: 2025-11-28 08:44:28.980457795 +0000 UTC m=+0.085828274 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:44:29 localhost podman[90779]: 2025-11-28 08:44:29.041141689 +0000 UTC m=+0.141123343 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 28 03:44:29 localhost podman[90779]: 2025-11-28 08:44:29.051921038 +0000 UTC m=+0.151902692 container exec_died 
cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, container_name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:44:29 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:44:29 localhost podman[90778]: 2025-11-28 08:44:29.202040214 +0000 UTC m=+0.307410703 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, container_name=metrics_qdr) Nov 28 03:44:29 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:44:29 localhost systemd[1]: tmp-crun.B9u5OF.mount: Deactivated successfully. Nov 28 03:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:44:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:44:32 localhost podman[90829]: 2025-11-28 08:44:32.980470639 +0000 UTC m=+0.089222437 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, release=1761123044, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 28 03:44:32 localhost podman[90829]: 2025-11-28 08:44:32.992339102 +0000 UTC m=+0.101090900 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=logrotate_crond, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-cron) Nov 28 03:44:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:44:33 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:44:33 localhost podman[90832]: 2025-11-28 08:44:33.030047854 +0000 UTC m=+0.131554230 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:44:33 localhost podman[90832]: 2025-11-28 08:44:33.062434443 +0000 UTC m=+0.163940889 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:44:33 localhost systemd[1]: tmp-crun.McpvTg.mount: Deactivated successfully. Nov 28 03:44:33 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:44:33 localhost podman[90830]: 2025-11-28 08:44:33.079466953 +0000 UTC m=+0.185157057 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, 
com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, build-date=2025-11-19T00:12:45Z) Nov 28 03:44:33 localhost podman[90830]: 2025-11-28 08:44:33.129878053 +0000 UTC m=+0.235568147 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z) Nov 28 03:44:33 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:44:33 localhost podman[90831]: 2025-11-28 08:44:33.152324269 +0000 UTC m=+0.254407973 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12) Nov 28 03:44:33 localhost podman[90879]: 2025-11-28 08:44:33.202497512 +0000 UTC m=+0.186238510 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044) Nov 28 03:44:33 localhost podman[90831]: 2025-11-28 08:44:33.22468926 +0000 UTC m=+0.326772984 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:44:33 localhost systemd[1]: 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:44:33 localhost podman[90879]: 2025-11-28 08:44:33.559286672 +0000 UTC m=+0.543027620 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:44:33 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:44:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:44:37 localhost podman[90938]: 2025-11-28 08:44:37.003182437 +0000 UTC m=+0.106577767 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, 
com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:44:37 localhost podman[90940]: 2025-11-28 08:44:37.04028777 +0000 UTC m=+0.139241705 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=) Nov 28 03:44:37 localhost podman[90938]: 2025-11-28 08:44:37.057414243 +0000 UTC m=+0.160809573 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1) Nov 28 03:44:37 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:44:37 localhost podman[90939]: 2025-11-28 08:44:37.068129531 +0000 UTC m=+0.169304433 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:44:37 localhost podman[90940]: 2025-11-28 08:44:37.121933024 +0000 UTC m=+0.220886969 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64) Nov 28 03:44:37 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:44:37 localhost podman[90939]: 2025-11-28 08:44:37.172489369 +0000 UTC m=+0.273664281 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, container_name=nova_compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible) Nov 28 03:44:37 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:44:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:44:59 localhost systemd[1]: tmp-crun.OxJevl.mount: Deactivated successfully. Nov 28 03:45:00 localhost systemd[1]: tmp-crun.Yyaaas.mount: Deactivated successfully. Nov 28 03:45:00 localhost podman[91090]: 2025-11-28 08:45:00.038914603 +0000 UTC m=+0.142403222 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 
'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:45:00 localhost podman[91089]: 2025-11-28 08:45:00.006266035 +0000 UTC m=+0.111785226 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, release=1761123044, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': 
{'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:45:00 localhost podman[91090]: 2025-11-28 08:45:00.077365957 +0000 UTC m=+0.180854486 container exec_died 
cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:45:00 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:45:00 localhost podman[91089]: 2025-11-28 08:45:00.219539351 +0000 UTC m=+0.325058552 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack 
Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:45:00 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:45:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:45:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:45:03 localhost recover_tripleo_nova_virtqemud[91162]: 62642 Nov 28 03:45:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:45:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:45:03 localhost podman[91138]: 2025-11-28 08:45:03.98595031 +0000 UTC m=+0.086944337 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:45:03 localhost systemd[1]: tmp-crun.dmbAn9.mount: Deactivated successfully. 
Nov 28 03:45:04 localhost podman[91137]: 2025-11-28 08:45:03.998451341 +0000 UTC m=+0.100734318 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, tcib_managed=true, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc.) Nov 28 03:45:04 localhost podman[91137]: 2025-11-28 08:45:04.006758615 +0000 UTC m=+0.109041552 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, config_id=tripleo_step4, container_name=logrotate_crond, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 28 03:45:04 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:45:04 localhost podman[91141]: 2025-11-28 08:45:04.042746715 +0000 UTC m=+0.130992814 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=nova_migration_target, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:45:04 localhost podman[91138]: 2025-11-28 08:45:04.099488718 +0000 UTC m=+0.200482755 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team) Nov 28 03:45:04 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:45:04 localhost podman[91139]: 2025-11-28 08:45:04.10446442 +0000 UTC m=+0.201980401 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-iscsid-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible) Nov 28 03:45:04 localhost podman[91140]: 2025-11-28 08:45:04.163662719 +0000 UTC m=+0.258354754 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, release=1761123044, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12) Nov 28 03:45:04 localhost podman[91139]: 2025-11-28 08:45:04.188336273 +0000 UTC m=+0.285852254 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, tcib_managed=true, release=1761123044, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Nov 28 03:45:04 localhost podman[91140]: 2025-11-28 
08:45:04.195244663 +0000 UTC m=+0.289936698 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12) Nov 28 03:45:04 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:45:04 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:45:04 localhost podman[91141]: 2025-11-28 08:45:04.448323926 +0000 UTC m=+0.536570005 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=nova_migration_target, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64) Nov 28 03:45:04 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:45:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:45:07 localhost podman[91251]: 2025-11-28 08:45:07.984650074 +0000 UTC m=+0.086040890 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:45:08 localhost systemd[1]: tmp-crun.D8HipQ.mount: Deactivated successfully. 
Nov 28 03:45:08 localhost podman[91250]: 2025-11-28 08:45:08.039225622 +0000 UTC m=+0.140588087 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.expose-services=, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:45:08 localhost podman[91251]: 2025-11-28 08:45:08.045537794 +0000 UTC m=+0.146928660 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=nova_compute) Nov 28 03:45:08 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:45:08 localhost systemd[1]: tmp-crun.6IKWXd.mount: Deactivated successfully. 
Nov 28 03:45:08 localhost podman[91252]: 2025-11-28 08:45:08.093462388 +0000 UTC m=+0.193126401 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:45:08 localhost podman[91250]: 2025-11-28 08:45:08.111685204 +0000 UTC m=+0.213047679 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:45:08 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. 
Nov 28 03:45:08 localhost podman[91252]: 2025-11-28 08:45:08.136463342 +0000 UTC m=+0.236127305 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, container_name=ovn_metadata_agent) Nov 28 03:45:08 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:45:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:45:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:45:30 localhost podman[91321]: 2025-11-28 08:45:30.970426371 +0000 UTC m=+0.077832969 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true) Nov 28 03:45:30 localhost podman[91321]: 2025-11-28 08:45:30.976786845 +0000 UTC m=+0.084193463 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, version=17.1.12, io.openshift.expose-services=, architecture=x86_64) Nov 28 03:45:30 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:45:31 localhost podman[91320]: 2025-11-28 08:45:31.026359309 +0000 UTC m=+0.135016805 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:45:31 localhost podman[91320]: 2025-11-28 08:45:31.203032667 +0000 UTC m=+0.311690193 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public) Nov 28 03:45:31 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:45:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:45:34 localhost systemd[1]: tmp-crun.3ezRyU.mount: Deactivated successfully. Nov 28 03:45:35 localhost podman[91369]: 2025-11-28 08:45:34.994059828 +0000 UTC m=+0.100398179 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Nov 28 03:45:35 localhost podman[91370]: 2025-11-28 08:45:35.047929003 +0000 UTC m=+0.145153576 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, architecture=x86_64, batch=17.1_20251118.1) Nov 28 03:45:35 localhost podman[91369]: 2025-11-28 08:45:35.05242772 +0000 UTC m=+0.158766071 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container) Nov 28 03:45:35 localhost podman[91370]: 2025-11-28 08:45:35.061462057 +0000 UTC m=+0.158686680 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=) Nov 28 03:45:35 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:45:35 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:45:35 localhost podman[91368]: 2025-11-28 08:45:35.140338096 +0000 UTC m=+0.245695667 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, container_name=logrotate_crond, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=) Nov 28 03:45:35 localhost podman[91372]: 2025-11-28 08:45:34.998783271 +0000 UTC m=+0.093639331 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Nov 28 03:45:35 localhost podman[91371]: 2025-11-28 08:45:35.11297484 +0000 UTC m=+0.208332305 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com) Nov 28 03:45:35 localhost podman[91368]: 2025-11-28 08:45:35.177375748 +0000 UTC m=+0.282733299 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron) Nov 28 03:45:35 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:45:35 localhost podman[91371]: 2025-11-28 08:45:35.199416011 +0000 UTC m=+0.294773466 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, 
managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:45:35 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:45:35 localhost podman[91372]: 2025-11-28 08:45:35.379532434 +0000 UTC m=+0.474388534 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, distribution-scope=public) Nov 28 03:45:35 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:45:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:45:38 localhost podman[91481]: 2025-11-28 08:45:38.976551446 +0000 UTC m=+0.079317134 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1) Nov 28 03:45:39 localhost systemd[1]: tmp-crun.oXkrVW.mount: Deactivated successfully. 
Nov 28 03:45:39 localhost podman[91480]: 2025-11-28 08:45:39.040905372 +0000 UTC m=+0.144513255 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4) Nov 28 03:45:39 localhost podman[91481]: 2025-11-28 08:45:39.049521095 +0000 UTC m=+0.152286753 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, 
release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_metadata_agent) Nov 28 03:45:39 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:45:39 localhost podman[91480]: 2025-11-28 08:45:39.074570071 +0000 UTC m=+0.178177964 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Nov 28 03:45:39 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:45:39 localhost systemd[1]: tmp-crun.UYJSnv.mount: Deactivated successfully. 
Nov 28 03:45:39 localhost podman[91479]: 2025-11-28 08:45:39.141814675 +0000 UTC m=+0.248226414 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vcs-type=git, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:45:39 localhost podman[91479]: 2025-11-28 08:45:39.169442199 +0000 UTC m=+0.275853958 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:45:39 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Deactivated successfully. Nov 28 03:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:46:02 localhost systemd[1]: tmp-crun.LHXBSZ.mount: Deactivated successfully. Nov 28 03:46:02 localhost podman[91630]: 2025-11-28 08:46:02.153485423 +0000 UTC m=+0.095175329 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=) Nov 28 03:46:02 localhost podman[91630]: 2025-11-28 08:46:02.193464015 +0000 UTC m=+0.135153951 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:46:02 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:46:02 localhost podman[91629]: 2025-11-28 08:46:02.245622698 +0000 UTC m=+0.188909392 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:46:02 localhost podman[91629]: 2025-11-28 08:46:02.479802803 +0000 UTC m=+0.423089527 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, url=https://www.redhat.com) Nov 28 03:46:02 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: 
Deactivated successfully. Nov 28 03:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:46:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:46:05 localhost podman[91678]: 2025-11-28 08:46:05.981303727 +0000 UTC m=+0.084441670 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1) Nov 28 03:46:05 localhost podman[91678]: 2025-11-28 08:46:05.993097327 +0000 UTC m=+0.096235260 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true) Nov 28 03:46:06 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:46:06 localhost podman[91684]: 2025-11-28 08:46:06.040186086 +0000 UTC m=+0.133929863 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target) Nov 28 03:46:06 localhost podman[91681]: 2025-11-28 08:46:05.9964629 +0000 UTC m=+0.092340112 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:46:06 localhost podman[91680]: 2025-11-28 08:46:06.095479636 +0000 UTC m=+0.194176254 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20251118.1, container_name=iscsid, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 iscsid) Nov 28 03:46:06 localhost podman[91681]: 2025-11-28 08:46:06.126437711 +0000 UTC m=+0.222314923 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1) Nov 28 03:46:06 localhost podman[91680]: 2025-11-28 08:46:06.134491948 +0000 UTC m=+0.233188566 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red 
Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_id=tripleo_step3) Nov 28 03:46:06 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:46:06 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:46:06 localhost podman[91679]: 2025-11-28 08:46:06.185082113 +0000 UTC m=+0.285763021 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64) Nov 28 03:46:06 localhost podman[91679]: 2025-11-28 08:46:06.215467211 +0000 UTC m=+0.316148139 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi) Nov 28 03:46:06 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:46:06 localhost podman[91684]: 2025-11-28 08:46:06.437476774 +0000 UTC m=+0.531220601 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:46:06 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:46:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:46:10 localhost systemd[1]: tmp-crun.nJ9R3C.mount: Deactivated successfully. 
Nov 28 03:46:10 localhost podman[91791]: 2025-11-28 08:46:10.043550143 +0000 UTC m=+0.147949022 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, 
io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, distribution-scope=public) Nov 28 03:46:10 localhost podman[91793]: 2025-11-28 08:46:10.090564399 +0000 UTC m=+0.189288414 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4) Nov 28 03:46:10 localhost podman[91791]: 2025-11-28 08:46:10.094817399 +0000 UTC m=+0.199216328 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git) Nov 28 03:46:10 localhost podman[91791]: unhealthy Nov 28 03:46:10 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:46:10 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:46:10 localhost podman[91792]: 2025-11-28 08:46:10.012518545 +0000 UTC m=+0.114011075 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:46:10 localhost podman[91793]: 2025-11-28 08:46:10.127397544 +0000 UTC m=+0.226121529 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public) Nov 28 03:46:10 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:46:10 localhost podman[91792]: 2025-11-28 08:46:10.144637431 +0000 UTC m=+0.246129921 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute) Nov 28 03:46:10 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:46:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:46:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:46:32 localhost podman[91867]: 2025-11-28 08:46:32.987929228 +0000 UTC m=+0.090061393 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git) Nov 28 03:46:33 localhost podman[91867]: 2025-11-28 08:46:33.024117023 +0000 UTC m=+0.126249208 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:46:33 localhost podman[91866]: 2025-11-28 08:46:33.036533863 +0000 UTC m=+0.140670099 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, 
name=metrics_qdr, health_status=healthy, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step1, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:46:33 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:46:33 localhost podman[91866]: 2025-11-28 08:46:33.22333201 +0000 UTC m=+0.327468216 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:46:33 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:46:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:46:36 localhost systemd[1]: tmp-crun.jXzA5A.mount: Deactivated successfully. Nov 28 03:46:36 localhost podman[91913]: 2025-11-28 08:46:36.992459011 +0000 UTC m=+0.099932254 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:46:37 localhost podman[91916]: 2025-11-28 08:46:37.053681671 +0000 UTC m=+0.151196080 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, version=17.1.12, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:46:37 localhost podman[91922]: 2025-11-28 08:46:37.101472211 +0000 UTC m=+0.192181712 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4) Nov 28 
03:46:37 localhost podman[91915]: 2025-11-28 08:46:36.972362057 +0000 UTC m=+0.075860058 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step3, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 03:46:37 localhost podman[91913]: 2025-11-28 08:46:37.125937758 +0000 UTC m=+0.233411011 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, release=1761123044) Nov 28 03:46:37 localhost podman[91914]: 2025-11-28 08:46:37.032277227 +0000 UTC m=+0.135317445 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:46:37 localhost podman[91915]: 2025-11-28 08:46:37.154918434 +0000 UTC m=+0.258416495 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, 
tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red 
Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:46:37 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:46:37 localhost podman[91914]: 2025-11-28 08:46:37.173969736 +0000 UTC m=+0.277009924 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team) Nov 28 03:46:37 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:46:37 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:46:37 localhost podman[91916]: 2025-11-28 08:46:37.278573212 +0000 UTC m=+0.376087621 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:46:37 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:46:37 localhost podman[91922]: 2025-11-28 08:46:37.54657119 +0000 UTC m=+0.637280711 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:46:37 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:46:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:46:40 localhost podman[92022]: 2025-11-28 08:46:40.973445644 +0000 UTC m=+0.077461198 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, container_name=nova_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:46:41 localhost podman[92022]: 2025-11-28 08:46:41.023880015 +0000 UTC m=+0.127895569 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:46:41 localhost podman[92021]: 2025-11-28 08:46:41.033436086 +0000 UTC m=+0.140171143 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ovn-controller, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:46:41 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:46:41 localhost podman[92023]: 2025-11-28 08:46:41.080835064 +0000 UTC m=+0.182419534 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent) Nov 28 03:46:41 localhost podman[92021]: 2025-11-28 08:46:41.103401584 +0000 UTC m=+0.210136621 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack 
Platform 17.1 ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:46:41 localhost podman[92021]: unhealthy Nov 28 03:46:41 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:46:41 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:46:41 localhost podman[92023]: 2025-11-28 08:46:41.12226366 +0000 UTC m=+0.223848160 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:46:41 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Deactivated successfully. Nov 28 03:46:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:46:53 localhost recover_tripleo_nova_virtqemud[92098]: 62642 Nov 28 03:46:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:46:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:47:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:47:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:47:03 localhost podman[92162]: 2025-11-28 08:47:03.977431439 +0000 UTC m=+0.082400888 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team) Nov 28 03:47:04 localhost podman[92163]: 2025-11-28 08:47:04.028848231 +0000 UTC m=+0.134058977 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, name=rhosp17/openstack-collectd, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 28 03:47:04 localhost podman[92163]: 2025-11-28 08:47:04.068492652 +0000 UTC m=+0.173703398 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, 
architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container) Nov 28 03:47:04 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:47:04 localhost podman[92162]: 2025-11-28 08:47:04.213708578 +0000 UTC m=+0.318677987 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com) Nov 28 03:47:04 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:47:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:47:07 localhost systemd[1]: tmp-crun.WSNp0s.mount: Deactivated successfully. Nov 28 03:47:07 localhost podman[92227]: 2025-11-28 08:47:07.989127012 +0000 UTC m=+0.092165866 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 28 03:47:08 localhost podman[92227]: 2025-11-28 08:47:08.012145936 +0000 UTC m=+0.115184800 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true) Nov 28 03:47:08 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:47:08 localhost podman[92248]: 2025-11-28 08:47:08.082749843 +0000 UTC m=+0.131959053 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack 
TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:47:08 localhost podman[92246]: 2025-11-28 08:47:08.087726945 +0000 UTC m=+0.143399402 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid) Nov 28 03:47:08 localhost podman[92247]: 2025-11-28 08:47:08.143304943 +0000 UTC m=+0.194131563 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 
03:47:08 localhost podman[92246]: 2025-11-28 08:47:08.171479634 +0000 UTC m=+0.227152061 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 03:47:08 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:47:08 localhost podman[92226]: 2025-11-28 08:47:08.187268235 +0000 UTC m=+0.292900959 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044) Nov 28 03:47:08 localhost podman[92226]: 2025-11-28 08:47:08.192169375 +0000 UTC m=+0.297802129 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible) Nov 28 03:47:08 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:47:08 localhost podman[92247]: 2025-11-28 08:47:08.224579416 +0000 UTC m=+0.275406076 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public) Nov 28 03:47:08 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:47:08 localhost podman[92248]: 2025-11-28 08:47:08.433361424 +0000 UTC m=+0.482570604 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, release=1761123044, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:47:08 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:47:08 localhost systemd[1]: tmp-crun.TXFubs.mount: Deactivated successfully. Nov 28 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:47:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:47:11 localhost systemd[1]: tmp-crun.uX9cU5.mount: Deactivated successfully. Nov 28 03:47:11 localhost podman[92340]: 2025-11-28 08:47:11.993906052 +0000 UTC m=+0.099564912 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, distribution-scope=public, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4) Nov 28 03:47:12 localhost podman[92340]: 2025-11-28 08:47:12.043192808 +0000 UTC m=+0.148851668 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:47:12 localhost systemd[1]: tmp-crun.zyAWSe.mount: Deactivated successfully. Nov 28 03:47:12 localhost podman[92340]: unhealthy Nov 28 03:47:12 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:47:12 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:47:12 localhost podman[92341]: 2025-11-28 08:47:12.045342143 +0000 UTC m=+0.147283180 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, config_id=tripleo_step5, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Nov 28 03:47:12 localhost podman[92342]: 2025-11-28 08:47:12.102090758 +0000 UTC m=+0.203657614 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:47:12 localhost podman[92342]: 2025-11-28 08:47:12.116535609 +0000 UTC m=+0.218102465 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:47:12 localhost podman[92342]: unhealthy Nov 28 03:47:12 localhost podman[92341]: 2025-11-28 08:47:12.125538354 +0000 UTC m=+0.227479441 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, container_name=nova_compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:47:12 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:47:12 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:47:12 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:47:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:47:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:47:34 localhost podman[92406]: 2025-11-28 08:47:34.972240351 +0000 UTC m=+0.083711928 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, 
version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc.) Nov 28 03:47:35 localhost podman[92407]: 2025-11-28 08:47:35.044986283 +0000 UTC m=+0.146047863 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, 
io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, config_id=tripleo_step3) Nov 28 03:47:35 localhost podman[92407]: 2025-11-28 08:47:35.079450396 +0000 UTC m=+0.180511976 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-collectd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:47:35 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:47:35 localhost podman[92406]: 2025-11-28 08:47:35.22687615 +0000 UTC m=+0.338347717 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, batch=17.1_20251118.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:47:35 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:47:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:47:39 localhost podman[92456]: 2025-11-28 08:47:38.999424425 +0000 UTC m=+0.096721236 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, distribution-scope=public) Nov 28 03:47:39 localhost podman[92456]: 2025-11-28 08:47:39.02642297 +0000 UTC m=+0.123719761 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:47:39 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:47:39 localhost systemd[1]: tmp-crun.NenImy.mount: Deactivated successfully. 
Nov 28 03:47:39 localhost podman[92454]: 2025-11-28 08:47:39.047307278 +0000 UTC m=+0.148732435 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:47:39 localhost podman[92453]: 2025-11-28 08:47:39.085823505 +0000 UTC m=+0.190445750 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, release=1761123044, batch=17.1_20251118.1, 
url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:47:39 localhost podman[92455]: 2025-11-28 08:47:39.100116271 +0000 UTC m=+0.200338761 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., container_name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z) Nov 28 03:47:39 localhost podman[92455]: 2025-11-28 08:47:39.137357929 +0000 UTC m=+0.237580419 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 
(image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:47:39 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:47:39 localhost podman[92459]: 2025-11-28 08:47:39.151583883 +0000 UTC m=+0.245897042 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:47:39 localhost podman[92453]: 2025-11-28 08:47:39.172347608 +0000 UTC m=+0.276969833 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 28 03:47:39 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:47:39 localhost podman[92454]: 2025-11-28 08:47:39.203559672 +0000 UTC m=+0.304984789 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:47:39 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:47:39 localhost podman[92459]: 2025-11-28 08:47:39.481502403 +0000 UTC m=+0.575815512 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, tcib_managed=true, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Nov 28 03:47:39 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:47:39 localhost systemd[1]: tmp-crun.0fUfPt.mount: Deactivated successfully. Nov 28 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:47:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:47:42 localhost systemd[1]: tmp-crun.2jSQxH.mount: Deactivated successfully. Nov 28 03:47:42 localhost podman[92566]: 2025-11-28 08:47:42.986876836 +0000 UTC m=+0.094881209 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, release=1761123044, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 
'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller) Nov 28 03:47:43 localhost podman[92568]: 2025-11-28 08:47:43.032358946 +0000 UTC m=+0.131634253 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:47:43 localhost podman[92566]: 2025-11-28 08:47:43.058429572 +0000 UTC m=+0.166433905 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, 
io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, release=1761123044, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:47:43 localhost podman[92566]: unhealthy Nov 28 03:47:43 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:47:43 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: 
Failed with result 'exit-code'. Nov 28 03:47:43 localhost podman[92568]: 2025-11-28 08:47:43.102923902 +0000 UTC m=+0.202199209 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 
17.1_20251118.1, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:47:43 localhost podman[92568]: unhealthy Nov 28 03:47:43 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:47:43 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:47:43 localhost podman[92567]: 2025-11-28 08:47:43.191723955 +0000 UTC m=+0.294936523 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute) Nov 28 03:47:43 localhost podman[92567]: 2025-11-28 08:47:43.22103631 +0000 UTC m=+0.324248858 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': 
{'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:47:43 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:47:43 localhost systemd[1]: tmp-crun.xr2iUb.mount: Deactivated successfully. Nov 28 03:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:48:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:48:05 localhost systemd[1]: tmp-crun.AuNkhT.mount: Deactivated successfully. Nov 28 03:48:05 localhost systemd[1]: tmp-crun.mMIwIK.mount: Deactivated successfully. 
Nov 28 03:48:05 localhost podman[92711]: 2025-11-28 08:48:05.86219018 +0000 UTC m=+0.157808102 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, 
container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd) Nov 28 03:48:05 localhost podman[92711]: 2025-11-28 08:48:05.874370363 +0000 UTC m=+0.169988225 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 28 03:48:05 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:48:05 localhost podman[92709]: 2025-11-28 08:48:05.815301698 +0000 UTC m=+0.110852018 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:48:06 localhost podman[92709]: 2025-11-28 08:48:06.013252885 +0000 UTC m=+0.308803185 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr) Nov 28 03:48:06 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:48:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:48:09 localhost podman[92808]: 2025-11-28 08:48:09.627765002 +0000 UTC m=+0.094596551 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 28 03:48:09 localhost podman[92808]: 2025-11-28 08:48:09.640958645 +0000 UTC m=+0.107790204 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:48:09 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:48:09 localhost podman[92810]: 2025-11-28 08:48:09.643839093 +0000 UTC m=+0.104875195 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vendor=Red Hat, Inc., container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 03:48:09 localhost podman[92816]: 2025-11-28 08:48:09.730216442 +0000 UTC m=+0.187525020 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4) Nov 28 03:48:09 localhost podman[92810]: 2025-11-28 08:48:09.780409236 +0000 UTC m=+0.241445278 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, url=https://www.redhat.com, release=1761123044, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, 
batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:48:09 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:48:09 localhost podman[92809]: 2025-11-28 08:48:09.83490019 +0000 UTC m=+0.298362406 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, version=17.1.12) Nov 28 03:48:09 localhost podman[92811]: 2025-11-28 08:48:09.783782958 +0000 UTC m=+0.243966074 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:48:09 localhost podman[92809]: 2025-11-28 08:48:09.864434073 +0000 UTC m=+0.327896299 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:48:09 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:48:09 localhost podman[92811]: 2025-11-28 08:48:09.918423052 +0000 UTC m=+0.378606168 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:48:09 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:48:10 localhost podman[92816]: 2025-11-28 08:48:10.09969325 +0000 UTC m=+0.557001838 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, 
distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:48:10 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:48:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:48:13 localhost podman[92924]: 2025-11-28 08:48:13.973014814 +0000 UTC m=+0.079978404 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 
ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com) Nov 28 03:48:14 localhost podman[92926]: 2025-11-28 08:48:14.034827183 +0000 UTC m=+0.136144040 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:48:14 localhost podman[92924]: 2025-11-28 08:48:14.06551261 +0000 UTC m=+0.172476180 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, container_name=ovn_controller, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:48:14 localhost podman[92924]: unhealthy Nov 28 03:48:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:48:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:48:14 localhost podman[92926]: 2025-11-28 08:48:14.097390695 +0000 UTC m=+0.198707602 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, container_name=ovn_metadata_agent, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Nov 28 03:48:14 localhost podman[92926]: unhealthy Nov 28 03:48:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:48:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:48:14 localhost podman[92925]: 2025-11-28 08:48:14.069262075 +0000 UTC m=+0.173600274 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-nova-compute, distribution-scope=public, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4) Nov 28 03:48:14 localhost podman[92925]: 2025-11-28 08:48:14.148285489 +0000 UTC m=+0.252623718 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, 
com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12) Nov 28 03:48:14 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:48:33 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:48:33 localhost recover_tripleo_nova_virtqemud[92994]: 62642 Nov 28 03:48:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:48:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:48:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:48:36 localhost systemd[1]: tmp-crun.TeAzvV.mount: Deactivated successfully. Nov 28 03:48:36 localhost podman[92995]: 2025-11-28 08:48:36.985508059 +0000 UTC m=+0.090195107 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1) Nov 28 03:48:37 localhost podman[92996]: 2025-11-28 08:48:37.093312083 +0000 UTC m=+0.195681810 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, config_data={'cap_add': 
['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3) Nov 28 03:48:37 localhost podman[92996]: 2025-11-28 08:48:37.10535388 +0000 UTC m=+0.207723587 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:48:37 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:48:37 localhost podman[92995]: 2025-11-28 08:48:37.20255586 +0000 UTC m=+0.307242848 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public) Nov 28 03:48:37 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:48:37 localhost systemd[1]: tmp-crun.So191M.mount: Deactivated successfully. 
Nov 28 03:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:48:39 localhost podman[93046]: 2025-11-28 08:48:39.982340406 +0000 UTC m=+0.086939567 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, container_name=iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:48:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:48:39 localhost podman[93046]: 2025-11-28 08:48:39.995788297 +0000 UTC m=+0.100387438 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, release=1761123044, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:48:40 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:48:40 localhost systemd[1]: tmp-crun.dk6ykH.mount: Deactivated successfully. 
Nov 28 03:48:40 localhost podman[93047]: 2025-11-28 08:48:40.091121669 +0000 UTC m=+0.191794731 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc.) Nov 28 03:48:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:48:40 localhost podman[93045]: 2025-11-28 08:48:40.140942181 +0000 UTC m=+0.246333106 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:48:40 localhost podman[93047]: 2025-11-28 08:48:40.149617937 +0000 UTC m=+0.250291099 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4) Nov 28 03:48:40 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:48:40 localhost podman[93122]: 2025-11-28 08:48:40.232853689 +0000 UTC m=+0.081395718 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:48:40 localhost podman[93045]: 2025-11-28 08:48:40.25349653 +0000 UTC m=+0.358887415 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, distribution-scope=public, managed_by=tripleo_ansible) Nov 28 03:48:40 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:48:40 localhost podman[93087]: 2025-11-28 08:48:40.331001757 +0000 UTC m=+0.328315021 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 28 03:48:40 localhost podman[93087]: 2025-11-28 08:48:40.355486486 +0000 UTC m=+0.352799790 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:48:40 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:48:40 localhost podman[93122]: 2025-11-28 08:48:40.675543903 +0000 UTC m=+0.524085972 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team) Nov 28 03:48:40 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:48:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:48:44 localhost podman[93160]: 2025-11-28 08:48:44.978933995 +0000 UTC m=+0.083698017 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public) Nov 28 03:48:45 localhost podman[93160]: 2025-11-28 08:48:45.019745073 +0000 UTC m=+0.124509065 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, release=1761123044, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 28 03:48:45 localhost podman[93160]: unhealthy Nov 28 03:48:45 localhost podman[93161]: 2025-11-28 08:48:45.035264057 +0000 UTC m=+0.136726718 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:48:45 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:48:45 localhost systemd[1]: 
9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:48:45 localhost systemd[1]: tmp-crun.y2Z7d3.mount: Deactivated successfully. Nov 28 03:48:45 localhost podman[93161]: 2025-11-28 08:48:45.09755852 +0000 UTC m=+0.199021191 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, release=1761123044) Nov 28 03:48:45 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:48:45 localhost podman[93162]: 2025-11-28 08:48:45.098300253 +0000 UTC m=+0.194317168 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team) Nov 28 03:48:45 localhost podman[93162]: 2025-11-28 08:48:45.185120945 +0000 UTC m=+0.281137890 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.expose-services=, release=1761123044, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.buildah.version=1.41.4) Nov 28 03:48:45 localhost podman[93162]: unhealthy Nov 28 03:48:45 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:48:45 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:49:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:49:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:49:07 localhost systemd[1]: tmp-crun.IKmhMn.mount: Deactivated successfully. Nov 28 03:49:07 localhost podman[93225]: 2025-11-28 08:49:07.994023723 +0000 UTC m=+0.094622092 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, container_name=collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd) Nov 28 03:49:08 localhost systemd[1]: tmp-crun.41YAkR.mount: Deactivated successfully. 
Nov 28 03:49:08 localhost podman[93224]: 2025-11-28 08:49:08.033013124 +0000 UTC m=+0.136538883 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr) Nov 28 03:49:08 localhost podman[93225]: 2025-11-28 08:49:08.078590736 +0000 UTC m=+0.179189075 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z) Nov 28 03:49:08 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:49:08 localhost podman[93224]: 2025-11-28 08:49:08.29047519 +0000 UTC m=+0.394000949 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat 
OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:49:08 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:49:10 localhost podman[93363]: 2025-11-28 08:49:10.68059673 +0000 UTC m=+0.087861925 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com) Nov 28 03:49:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:49:10 localhost podman[93363]: 2025-11-28 08:49:10.71953893 +0000 UTC m=+0.126804125 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:49:10 localhost systemd[1]: tmp-crun.85ozvx.mount: Deactivated successfully. 
Nov 28 03:49:10 localhost podman[93357]: 2025-11-28 08:49:10.734628951 +0000 UTC m=+0.150505509 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:49:10 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:49:10 localhost podman[93357]: 2025-11-28 08:49:10.747386261 +0000 UTC m=+0.163262829 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-18T22:49:32Z, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, container_name=logrotate_crond, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat 
OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12) Nov 28 03:49:10 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:49:10 localhost podman[93409]: 2025-11-28 08:49:10.799933446 +0000 UTC m=+0.088453223 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:49:10 localhost podman[93370]: 2025-11-28 08:49:10.841231487 +0000 UTC m=+0.246316446 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:49:10 localhost podman[93360]: 2025-11-28 08:49:10.889849343 +0000 UTC m=+0.305653999 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
version=17.1.12, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) 
Nov 28 03:49:10 localhost podman[93370]: 2025-11-28 08:49:10.900482788 +0000 UTC m=+0.305567737 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:49:10 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:49:10 localhost podman[93360]: 2025-11-28 08:49:10.924500012 +0000 UTC m=+0.340304658 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true) Nov 28 03:49:10 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:49:11 localhost podman[93409]: 2025-11-28 08:49:11.224464966 +0000 UTC m=+0.512984733 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 03:49:11 localhost podman[93517]: Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.234829863 +0000 UTC m=+0.076019114 container create 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_sutherland, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, name=rhceph, release=553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64) Nov 28 03:49:11 
localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:49:11 localhost systemd[1]: Started libpod-conmon-1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f.scope. Nov 28 03:49:11 localhost systemd[1]: Started libcrun container. Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.293383972 +0000 UTC m=+0.134573263 container init 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_sutherland, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.expose-services=, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.202859995 +0000 UTC m=+0.044049256 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.306590375 +0000 UTC m=+0.147779656 container start 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=angry_sutherland, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, release=553, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.308944326 +0000 UTC m=+0.150133617 container attach 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_sutherland, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported 
base image., GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=) Nov 28 03:49:11 localhost angry_sutherland[93532]: 167 167 Nov 28 03:49:11 localhost systemd[1]: libpod-1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f.scope: Deactivated successfully. Nov 28 03:49:11 localhost podman[93517]: 2025-11-28 08:49:11.313804955 +0000 UTC m=+0.154994246 container died 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_sutherland, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , name=rhceph, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph) Nov 28 03:49:11 localhost podman[93538]: 2025-11-28 08:49:11.418184414 +0000 UTC m=+0.089168665 container remove 1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=angry_sutherland, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 03:49:11 localhost systemd[1]: libpod-conmon-1dd1df3bc840dfca20bb34d3b19a63d370c34aa64b9d2a0a4d6370f3e9b8a44f.scope: Deactivated successfully. 
Nov 28 03:49:11 localhost podman[93560]: Nov 28 03:49:11 localhost podman[93560]: 2025-11-28 08:49:11.660175788 +0000 UTC m=+0.085994419 container create ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 03:49:11 localhost systemd[1]: Started libpod-conmon-ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6.scope. Nov 28 03:49:11 localhost podman[93560]: 2025-11-28 08:49:11.627710925 +0000 UTC m=+0.053529606 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 03:49:11 localhost systemd[1]: Started libcrun container. 
Nov 28 03:49:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c6b6c643f4130ffc3706e1d537273f2123af64f3aaeec13c9f99a6da6f1157/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 03:49:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c6b6c643f4130ffc3706e1d537273f2123af64f3aaeec13c9f99a6da6f1157/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 03:49:11 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2c6b6c643f4130ffc3706e1d537273f2123af64f3aaeec13c9f99a6da6f1157/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 03:49:11 localhost podman[93560]: 2025-11-28 08:49:11.7443595 +0000 UTC m=+0.170178141 container init ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Nov 28 03:49:11 localhost 
systemd[1]: tmp-crun.z63Cm4.mount: Deactivated successfully. Nov 28 03:49:11 localhost podman[93560]: 2025-11-28 08:49:11.760157402 +0000 UTC m=+0.185975993 container start ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, version=7, RELEASE=main, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 03:49:11 localhost podman[93560]: 2025-11-28 08:49:11.76039447 +0000 UTC m=+0.186213101 container attach ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, version=7, name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 28 03:49:12 localhost wizardly_perlman[93576]: [
Nov 28 03:49:12 localhost wizardly_perlman[93576]: {
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "available": false,
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "ceph_device": false,
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "device_id": "QEMU_DVD-ROM_QM00001",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "lsm_data": {},
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "lvs": [],
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "path": "/dev/sr0",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "rejected_reasons": [
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "Insufficient space (<5GB)",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "Has a FileSystem"
Nov 28 03:49:12 localhost wizardly_perlman[93576]: ],
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "sys_api": {
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "actuators": null,
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "device_nodes": "sr0",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "human_readable_size": "482.00 KB",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "id_bus": "ata",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "model": "QEMU DVD-ROM",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "nr_requests": "2",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "partitions": {},
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "path": "/dev/sr0",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "removable": "1",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "rev": "2.5+",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "ro": "0",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "rotational": "1",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "sas_address": "",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "sas_device_handle": "",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "scheduler_mode": "mq-deadline",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "sectors": 0,
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "sectorsize": "2048",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "size": 493568.0,
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "support_discard": "0",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "type": "disk",
Nov 28 03:49:12 localhost wizardly_perlman[93576]: "vendor": "QEMU"
Nov 28 03:49:12 localhost wizardly_perlman[93576]: }
Nov 28 03:49:12 localhost wizardly_perlman[93576]: }
Nov 28 03:49:12 localhost wizardly_perlman[93576]: ]
Nov 28 03:49:12 localhost systemd[1]: libpod-ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6.scope: Deactivated successfully.
Nov 28 03:49:12 localhost systemd[1]: libpod-ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6.scope: Consumed 1.048s CPU time.
Nov 28 03:49:12 localhost podman[93560]: 2025-11-28 08:49:12.757155681 +0000 UTC m=+1.182974332 container died ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, version=7, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, release=553, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 03:49:12 localhost systemd[1]: tmp-crun.a12E0H.mount: Deactivated successfully. Nov 28 03:49:12 localhost systemd[1]: var-lib-containers-storage-overlay-e2c6b6c643f4130ffc3706e1d537273f2123af64f3aaeec13c9f99a6da6f1157-merged.mount: Deactivated successfully. 
Nov 28 03:49:12 localhost podman[95609]: 2025-11-28 08:49:12.866816101 +0000 UTC m=+0.096915011 container remove ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_perlman, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, GIT_BRANCH=main, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12) Nov 28 03:49:12 localhost systemd[1]: libpod-conmon-ef7ad5e4773c66a693d67323a7d8be57322232f40de8e0c51e7534c66989eca6.scope: Deactivated successfully. Nov 28 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:49:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:49:15 localhost podman[95639]: 2025-11-28 08:49:15.980701604 +0000 UTC m=+0.083190733 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:49:16 localhost podman[95640]: 2025-11-28 08:49:16.02803114 +0000 UTC m=+0.132001283 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute) Nov 28 03:49:16 localhost podman[95639]: 2025-11-28 08:49:16.053037584 +0000 UTC m=+0.155526703 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1) Nov 28 03:49:16 localhost podman[95640]: 2025-11-28 08:49:16.080028118 +0000 UTC m=+0.183998341 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, name=rhosp17/openstack-nova-compute, release=1761123044, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64) Nov 28 03:49:16 localhost podman[95641]: 2025-11-28 08:49:16.092190261 +0000 UTC m=+0.194899446 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true) Nov 28 03:49:16 localhost systemd[1]: 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:49:16 localhost podman[95641]: 2025-11-28 08:49:16.104125945 +0000 UTC m=+0.206835180 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, version=17.1.12, vcs-type=git, 
description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:49:16 localhost podman[95639]: unhealthy Nov 28 03:49:16 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:49:16 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:49:16 localhost podman[95641]: unhealthy Nov 28 03:49:16 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:49:16 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:49:16 localhost systemd[1]: tmp-crun.d2ri3Q.mount: Deactivated successfully. 
Nov 28 03:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:49:38 localhost podman[95701]: 2025-11-28 08:49:38.978186445 +0000 UTC m=+0.079750514 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:49:39 localhost podman[95701]: 2025-11-28 08:49:39.009675804 +0000 UTC m=+0.111239813 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO 
Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, container_name=collectd, tcib_managed=true) Nov 28 03:49:39 localhost podman[95700]: 2025-11-28 08:49:39.023420366 +0000 UTC m=+0.127575975 container 
health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container) Nov 28 03:49:39 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:49:39 localhost podman[95700]: 2025-11-28 08:49:39.204037262 +0000 UTC m=+0.308192821 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Nov 28 03:49:39 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:49:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:49:40 localhost podman[95753]: 2025-11-28 08:49:40.991653384 +0000 UTC m=+0.097044146 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64) Nov 28 03:49:41 localhost podman[95753]: 2025-11-28 08:49:41.024155214 +0000 UTC m=+0.129545966 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, 
architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:49:41 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:49:41 localhost podman[95752]: 2025-11-28 08:49:41.070993304 +0000 UTC m=+0.177891172 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
tcib_managed=true, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1) Nov 28 03:49:41 localhost podman[95781]: 2025-11-28 08:49:41.12741892 +0000 UTC m=+0.129658849 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4) Nov 28 03:49:41 localhost podman[95781]: 2025-11-28 08:49:41.186671203 +0000 UTC m=+0.188911142 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:49:41 localhost podman[95783]: 2025-11-28 08:49:41.188604701 +0000 UTC m=+0.188974633 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, 
name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc.) Nov 28 03:49:41 localhost podman[95752]: 2025-11-28 08:49:41.208675979 +0000 UTC m=+0.315573847 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, vcs-type=git, container_name=logrotate_crond) Nov 28 03:49:41 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:49:41 localhost podman[95783]: 2025-11-28 08:49:41.223229287 +0000 UTC m=+0.223599189 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute) Nov 28 03:49:41 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:49:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:49:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:49:41 localhost podman[95840]: 2025-11-28 08:49:41.338990957 +0000 UTC m=+0.078326920 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, version=17.1.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, architecture=x86_64, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, 
name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 03:49:41 localhost podman[95840]: 2025-11-28 08:49:41.714516527 +0000 UTC m=+0.453852550 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:49:41 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:49:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:49:46 localhost podman[95865]: 2025-11-28 08:49:46.985633074 +0000 UTC m=+0.084608624 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 03:49:47 localhost podman[95865]: 2025-11-28 08:49:47.026020476 +0000 UTC m=+0.124996016 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true) Nov 28 03:49:47 localhost podman[95865]: unhealthy Nov 28 03:49:47 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:49:47 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:49:47 localhost systemd[1]: tmp-crun.w1QFGd.mount: Deactivated successfully. Nov 28 03:49:47 localhost podman[95866]: 2025-11-28 08:49:47.051661164 +0000 UTC m=+0.147300660 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, architecture=x86_64, vcs-type=git, release=1761123044, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1) Nov 28 03:49:47 localhost podman[95866]: 2025-11-28 08:49:47.075517049 +0000 UTC m=+0.171156545 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team) Nov 28 03:49:47 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:49:47 localhost podman[95867]: 2025-11-28 08:49:47.092962185 +0000 UTC m=+0.184852826 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 28 03:49:47 localhost podman[95867]: 2025-11-28 08:49:47.10937873 +0000 UTC m=+0.201269371 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 28 03:49:47 localhost podman[95867]: unhealthy Nov 28 03:49:47 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:49:47 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:50:09 localhost systemd[1]: tmp-crun.wLofg3.mount: Deactivated successfully. Nov 28 03:50:09 localhost podman[95930]: 2025-11-28 08:50:09.97627566 +0000 UTC m=+0.084244413 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:50:09 localhost podman[95930]: 2025-11-28 08:50:09.987390782 +0000 UTC m=+0.095359595 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:50:09 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:50:10 localhost podman[95929]: 2025-11-28 08:50:10.083398525 +0000 UTC m=+0.192230254 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:50:10 localhost podman[95929]: 2025-11-28 08:50:10.296462208 +0000 UTC m=+0.405293917 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:50:10 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. 
Nov 28 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:50:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:50:12 localhost podman[95985]: 2025-11-28 08:50:11.99854889 +0000 UTC m=+0.094159047 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:50:12 localhost podman[95978]: 2025-11-28 08:50:12.046761313 +0000 UTC m=+0.149155079 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, version=17.1.12) Nov 28 03:50:12 localhost podman[95978]: 2025-11-28 08:50:12.058415362 +0000 UTC m=+0.160809198 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 
17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid) Nov 28 03:50:12 localhost systemd[1]: tmp-crun.Wd0nEc.mount: Deactivated successfully. Nov 28 03:50:12 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:50:12 localhost podman[95976]: 2025-11-28 08:50:12.087933919 +0000 UTC m=+0.197428483 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, container_name=logrotate_crond, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64) Nov 28 03:50:12 localhost podman[95977]: 2025-11-28 08:50:12.104424306 +0000 UTC m=+0.207663967 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 28 03:50:12 localhost podman[95976]: 2025-11-28 08:50:12.118481769 +0000 UTC m=+0.227976353 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible) Nov 28 03:50:12 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:50:12 localhost podman[95977]: 2025-11-28 08:50:12.139392113 +0000 UTC m=+0.242631824 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Nov 28 03:50:12 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:50:12 localhost podman[95984]: 2025-11-28 08:50:12.163521414 +0000 UTC m=+0.260897855 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:50:12 localhost podman[95984]: 2025-11-28 08:50:12.215834863 +0000 UTC m=+0.313211394 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc.) Nov 28 03:50:12 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:50:12 localhost podman[95985]: 2025-11-28 08:50:12.385446861 +0000 UTC m=+0.481057028 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, 
io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target) Nov 28 03:50:12 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:50:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:50:17 localhost systemd[1]: tmp-crun.mQfvhH.mount: Deactivated successfully. 
Nov 28 03:50:17 localhost podman[96219]: 2025-11-28 08:50:17.973884097 +0000 UTC m=+0.081892939 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:50:18 localhost podman[96220]: 2025-11-28 08:50:18.018458898 +0000 UTC m=+0.124748107 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5) Nov 28 03:50:18 localhost podman[96219]: 2025-11-28 08:50:18.041194107 +0000 UTC m=+0.149202959 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, 
tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, batch=17.1_20251118.1, release=1761123044) Nov 28 03:50:18 localhost podman[96219]: unhealthy Nov 28 03:50:18 localhost podman[96220]: 2025-11-28 08:50:18.04941124 +0000 UTC m=+0.155700439 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red 
Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:50:18 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:50:18 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:50:18 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:50:18 localhost podman[96221]: 2025-11-28 08:50:18.129584187 +0000 UTC m=+0.229651415 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:50:18 localhost podman[96221]: 2025-11-28 08:50:18.172643641 +0000 UTC m=+0.272710869 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, 
url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:50:18 localhost podman[96221]: unhealthy Nov 28 03:50:18 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:50:18 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:50:33 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:50:33 localhost recover_tripleo_nova_virtqemud[96283]: 62642 Nov 28 03:50:33 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:50:33 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:50:41 localhost podman[96284]: 2025-11-28 08:50:41.000219771 +0000 UTC m=+0.103614427 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12) Nov 28 03:50:41 localhost podman[96285]: 2025-11-28 08:50:41.094547093 +0000 UTC m=+0.194154323 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, url=https://www.redhat.com) Nov 28 03:50:41 localhost podman[96285]: 2025-11-28 08:50:41.110257055 +0000 UTC m=+0.209864275 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-collectd, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:50:41 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:50:41 localhost podman[96284]: 2025-11-28 08:50:41.226011386 +0000 UTC m=+0.329406042 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.41.4) Nov 28 03:50:41 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:50:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:50:42 localhost podman[96335]: 2025-11-28 08:50:42.993684176 +0000 UTC m=+0.088589046 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true) Nov 28 03:50:43 localhost podman[96334]: 2025-11-28 08:50:43.047958505 +0000 UTC m=+0.146655741 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:50:43 localhost podman[96334]: 2025-11-28 08:50:43.086513041 +0000 UTC m=+0.185210257 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:50:43 localhost podman[96336]: 2025-11-28 08:50:43.095172077 +0000 UTC m=+0.188567750 
container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat 
OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044) Nov 28 03:50:43 localhost podman[96335]: 2025-11-28 08:50:43.098561481 +0000 UTC m=+0.193466361 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:50:43 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:50:43 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:50:43 localhost podman[96332]: 2025-11-28 08:50:43.143357749 +0000 UTC m=+0.245377008 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:32Z, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 03:50:43 localhost podman[96333]: 2025-11-28 08:50:43.201955401 +0000 UTC m=+0.303924239 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:50:43 localhost podman[96333]: 2025-11-28 08:50:43.224556426 +0000 UTC m=+0.326525284 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, 
container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:50:43 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:50:43 localhost podman[96332]: 2025-11-28 08:50:43.278229407 +0000 UTC m=+0.380248666 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1) Nov 28 03:50:43 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:50:43 localhost podman[96336]: 2025-11-28 08:50:43.438362692 +0000 UTC m=+0.531758345 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64) Nov 28 03:50:43 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:50:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:50:48 localhost systemd[1]: tmp-crun.a7PicO.mount: Deactivated successfully. 
Nov 28 03:50:49 localhost podman[96448]: 2025-11-28 08:50:48.99953727 +0000 UTC m=+0.102986848 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, description=Red 
Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 03:50:49 localhost systemd[1]: tmp-crun.7EPVlV.mount: Deactivated successfully. Nov 28 03:50:49 localhost podman[96450]: 2025-11-28 08:50:49.045859585 +0000 UTC m=+0.141362168 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.) 
Nov 28 03:50:49 localhost podman[96449]: 2025-11-28 08:50:49.094807 +0000 UTC m=+0.193274496 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git) Nov 28 03:50:49 localhost podman[96450]: 2025-11-28 08:50:49.115148007 +0000 UTC m=+0.210650590 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:50:49 localhost podman[96450]: unhealthy Nov 28 03:50:49 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:50:49 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:50:49 localhost podman[96449]: 2025-11-28 08:50:49.150788532 +0000 UTC m=+0.249255938 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:50:49 localhost systemd[1]: 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:50:49 localhost podman[96448]: 2025-11-28 08:50:49.170980374 +0000 UTC m=+0.274429942 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1761123044, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.) Nov 28 03:50:49 localhost podman[96448]: unhealthy Nov 28 03:50:49 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:50:49 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:51:11 localhost podman[96513]: 2025-11-28 08:51:11.979185457 +0000 UTC m=+0.090277957 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack 
Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:51:12 localhost systemd[1]: tmp-crun.XF58oF.mount: Deactivated successfully. 
Nov 28 03:51:12 localhost podman[96514]: 2025-11-28 08:51:12.029889888 +0000 UTC m=+0.139049359 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc.) Nov 28 03:51:12 localhost podman[96514]: 2025-11-28 08:51:12.039006338 +0000 UTC m=+0.148165829 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12) Nov 28 03:51:12 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:51:12 localhost podman[96513]: 2025-11-28 08:51:12.18732458 +0000 UTC m=+0.298417080 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, architecture=x86_64, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:51:12 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:51:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:51:13 localhost podman[96563]: 2025-11-28 08:51:13.972853248 +0000 UTC m=+0.082540509 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true) Nov 28 03:51:14 localhost podman[96564]: 2025-11-28 08:51:14.026866949 +0000 UTC m=+0.134892920 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:51:14 localhost podman[96564]: 2025-11-28 08:51:14.083481481 +0000 UTC m=+0.191507522 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:51:14 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:51:14 localhost podman[96572]: 2025-11-28 08:51:14.138567635 +0000 UTC m=+0.238167707 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, container_name=nova_migration_target, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:51:14 localhost podman[96566]: 2025-11-28 08:51:14.088268257 +0000 UTC m=+0.189806597 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1) Nov 28 03:51:14 localhost podman[96565]: 2025-11-28 08:51:14.18848771 +0000 UTC m=+0.290769934 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc.) 
Nov 28 03:51:14 localhost podman[96563]: 2025-11-28 08:51:14.215921264 +0000 UTC m=+0.325608525 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, tcib_managed=true) Nov 28 03:51:14 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:51:14 localhost podman[96565]: 2025-11-28 08:51:14.2275002 +0000 UTC m=+0.329782404 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:51:14 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:51:14 localhost podman[96566]: 2025-11-28 08:51:14.273554056 +0000 UTC m=+0.375092416 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:11:48Z) Nov 28 03:51:14 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:51:14 localhost podman[96572]: 2025-11-28 08:51:14.536547455 +0000 UTC m=+0.636147577 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:51:14 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:51:19 localhost podman[96753]: 2025-11-28 08:51:19.98740558 +0000 UTC m=+0.082830129 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com) Nov 28 03:51:20 localhost podman[96751]: 2025-11-28 08:51:20.033381624 +0000 UTC m=+0.131985970 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:51:20 localhost podman[96752]: 2025-11-28 08:51:20.082090213 +0000 UTC m=+0.181539185 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public) Nov 28 03:51:20 localhost podman[96751]: 2025-11-28 08:51:20.103052727 +0000 UTC m=+0.201657133 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, container_name=ovn_controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team) Nov 28 03:51:20 localhost podman[96753]: 2025-11-28 08:51:20.10377319 +0000 UTC m=+0.199197739 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, container_name=ovn_metadata_agent, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1) 
Nov 28 03:51:20 localhost podman[96753]: unhealthy Nov 28 03:51:20 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:51:20 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:51:20 localhost podman[96751]: unhealthy Nov 28 03:51:20 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:51:20 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:51:20 localhost podman[96752]: 2025-11-28 08:51:20.211434561 +0000 UTC m=+0.310883563 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, name=rhosp17/openstack-nova-compute) Nov 28 03:51:20 localhost systemd[1]: 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:51:42 localhost podman[96817]: 2025-11-28 08:51:42.984977058 +0000 UTC m=+0.086770840 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step3, name=rhosp17/openstack-collectd, release=1761123044, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:51:42 localhost podman[96817]: 2025-11-28 08:51:42.993007975 +0000 UTC m=+0.094801717 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, 
config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:51:43 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:51:43 localhost systemd[1]: tmp-crun.xyESSM.mount: Deactivated successfully. Nov 28 03:51:43 localhost podman[96816]: 2025-11-28 08:51:43.036928497 +0000 UTC m=+0.141032229 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': 
False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 28 03:51:43 localhost podman[96816]: 2025-11-28 08:51:43.207984908 +0000 UTC m=+0.312088590 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:51:43 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:51:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:51:44 localhost podman[96866]: 2025-11-28 08:51:44.987540183 +0000 UTC m=+0.090650199 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:51:45 localhost podman[96867]: 2025-11-28 08:51:45.037783898 +0000 UTC m=+0.138907082 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:51:45 localhost podman[96866]: 2025-11-28 08:51:45.04661647 +0000 UTC m=+0.149726526 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, 
vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:51:45 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:51:45 localhost podman[96865]: 2025-11-28 08:51:45.091631924 +0000 UTC m=+0.197354600 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, batch=17.1_20251118.1) Nov 28 03:51:45 localhost podman[96865]: 2025-11-28 08:51:45.100402085 +0000 UTC m=+0.206124761 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=logrotate_crond, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:51:45 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:51:45 localhost podman[96868]: 2025-11-28 08:51:45.149866965 +0000 UTC m=+0.248150012 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:51:45 localhost podman[96868]: 2025-11-28 08:51:45.184424869 +0000 UTC m=+0.282707946 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:51:45 localhost podman[96874]: 2025-11-28 08:51:45.210658235 +0000 UTC m=+0.301983778 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true) Nov 28 03:51:45 localhost podman[96867]: 2025-11-28 08:51:45.22381181 +0000 UTC m=+0.324934974 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:51:45 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:51:45 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:51:45 localhost podman[96874]: 2025-11-28 08:51:45.58894623 +0000 UTC m=+0.680271823 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:51:45 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:51:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:51:50 localhost podman[96981]: 2025-11-28 08:51:50.988164217 +0000 UTC m=+0.089592677 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:51:51 localhost podman[96981]: 2025-11-28 08:51:51.031718587 +0000 UTC m=+0.133147047 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:51:51 localhost podman[96981]: unhealthy Nov 28 03:51:51 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:51:51 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:51:51 localhost podman[96982]: 2025-11-28 08:51:51.052214747 +0000 UTC m=+0.151543892 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true) Nov 28 03:51:51 localhost podman[96983]: 2025-11-28 08:51:51.103546535 +0000 UTC m=+0.197033981 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, release=1761123044, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 03:51:51 localhost podman[96982]: 2025-11-28 08:51:51.133644001 +0000 UTC m=+0.232973156 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 
17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:36:58Z, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:51:51 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:51:51 localhost podman[96983]: 2025-11-28 08:51:51.150598633 +0000 UTC m=+0.244085969 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, release=1761123044, distribution-scope=public, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-type=git) Nov 28 03:51:51 localhost podman[96983]: unhealthy Nov 28 03:51:51 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:51:51 localhost systemd[1]: 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:52:13 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:52:13 localhost recover_tripleo_nova_virtqemud[97055]: 62642 Nov 28 03:52:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:52:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:52:14 localhost podman[97048]: 2025-11-28 08:52:14.003216244 +0000 UTC m=+0.089969758 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': 
'512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git) Nov 28 03:52:14 localhost systemd[1]: tmp-crun.yHVDz7.mount: Deactivated successfully. 
Nov 28 03:52:14 localhost podman[97047]: 2025-11-28 08:52:14.057173673 +0000 UTC m=+0.148247020 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, release=1761123044, container_name=metrics_qdr, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 03:52:14 localhost podman[97048]: 2025-11-28 08:52:14.071844624 +0000 UTC m=+0.158598198 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:52:14 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:52:14 localhost podman[97047]: 2025-11-28 08:52:14.270528935 +0000 UTC m=+0.361602292 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, 
container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd) Nov 28 03:52:14 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:52:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:52:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:52:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:52:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:52:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:52:15 localhost podman[97113]: 2025-11-28 08:52:15.991307101 +0000 UTC m=+0.083084235 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git) Nov 28 03:52:16 localhost podman[97099]: 2025-11-28 08:52:15.967141889 +0000 UTC m=+0.074884215 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:52:16 localhost systemd[1]: tmp-crun.R2bbVm.mount: Deactivated successfully. 
Nov 28 03:52:16 localhost podman[97099]: 2025-11-28 08:52:16.047653405 +0000 UTC m=+0.155395711 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, version=17.1.12) Nov 28 03:52:16 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:52:16 localhost podman[97112]: 2025-11-28 08:52:16.139695775 +0000 UTC m=+0.237807244 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:52:16 localhost podman[97100]: 2025-11-28 08:52:16.05498583 +0000 UTC m=+0.154803292 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, 
io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, release=1761123044, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true) Nov 28 03:52:16 localhost podman[97100]: 2025-11-28 08:52:16.184146393 +0000 UTC m=+0.283963875 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 
17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Nov 28 03:52:16 localhost podman[97112]: 2025-11-28 08:52:16.193894243 +0000 UTC m=+0.292005652 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:52:16 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:52:16 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:52:16 localhost podman[97098]: 2025-11-28 08:52:16.249799793 +0000 UTC m=+0.357925610 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 03:52:16 localhost podman[97098]: 2025-11-28 08:52:16.284427727 +0000 UTC m=+0.392553474 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12) Nov 28 03:52:16 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:52:16 localhost podman[97113]: 2025-11-28 08:52:16.342427872 +0000 UTC m=+0.434205016 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:52:16 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:52:18 localhost systemd[1]: tmp-crun.iS5MDP.mount: Deactivated successfully. Nov 28 03:52:18 localhost podman[97308]: 2025-11-28 08:52:18.411349156 +0000 UTC m=+0.102885235 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, ceph=True, io.buildah.version=1.33.12, name=rhceph, build-date=2025-09-24T08:57:55, version=7, description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph) Nov 28 03:52:18 localhost podman[97308]: 2025-11-28 08:52:18.515458829 +0000 UTC m=+0.206994898 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.buildah.version=1.33.12, vcs-type=git, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:52:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:52:21 localhost podman[97455]: 2025-11-28 08:52:21.985392315 +0000 UTC m=+0.084721016 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044) Nov 28 03:52:22 localhost podman[97454]: 2025-11-28 08:52:22.041256513 +0000 UTC m=+0.141847994 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible) Nov 28 03:52:22 localhost podman[97455]: 2025-11-28 08:52:22.093934333 +0000 UTC m=+0.193263004 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., container_name=nova_compute) Nov 28 03:52:22 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:52:22 localhost podman[97454]: 2025-11-28 08:52:22.110199924 +0000 UTC m=+0.210791395 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:52:22 localhost podman[97454]: unhealthy Nov 28 03:52:22 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:52:22 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:52:22 localhost podman[97456]: 2025-11-28 08:52:22.12144185 +0000 UTC m=+0.218645416 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, io.openshift.expose-services=) Nov 28 03:52:22 localhost podman[97456]: 2025-11-28 08:52:22.212579312 +0000 UTC m=+0.309782828 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, 
vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:52:22 localhost podman[97456]: unhealthy Nov 28 03:52:22 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:52:22 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:52:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:52:44 localhost podman[97519]: 2025-11-28 08:52:44.988033121 +0000 UTC m=+0.095051134 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:52:45 localhost systemd[1]: tmp-crun.xeJn9w.mount: Deactivated successfully. 
Nov 28 03:52:45 localhost podman[97520]: 2025-11-28 08:52:45.029050383 +0000 UTC m=+0.133581279 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z) Nov 28 03:52:45 localhost podman[97520]: 2025-11-28 08:52:45.040443013 +0000 UTC m=+0.144973979 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 
'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:52:45 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:52:45 localhost podman[97519]: 2025-11-28 08:52:45.195837852 +0000 UTC m=+0.302855855 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr) Nov 28 03:52:45 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:52:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:52:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:52:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:52:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:52:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:52:46 localhost podman[97577]: 2025-11-28 08:52:46.989034807 +0000 UTC m=+0.077595528 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container) Nov 28 03:52:47 localhost podman[97569]: 2025-11-28 08:52:47.045414721 +0000 UTC m=+0.143480754 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:52:47 localhost podman[97571]: 2025-11-28 08:52:47.104982693 +0000 UTC m=+0.197074822 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, 
vcs-type=git) Nov 28 03:52:47 localhost podman[97569]: 2025-11-28 08:52:47.106474249 +0000 UTC m=+0.204540292 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:52:47 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:52:47 localhost podman[97571]: 2025-11-28 08:52:47.189545663 +0000 UTC m=+0.281637772 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:52:47 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:52:47 localhost podman[97570]: 2025-11-28 08:52:47.203948546 +0000 UTC m=+0.298024626 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3) Nov 28 03:52:47 localhost podman[97568]: 2025-11-28 08:52:47.159286972 +0000 UTC m=+0.260425800 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4) Nov 28 03:52:47 localhost podman[97568]: 2025-11-28 08:52:47.240280783 +0000 UTC m=+0.341419591 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:52:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:52:47 localhost podman[97570]: 2025-11-28 08:52:47.290585681 +0000 UTC m=+0.384661751 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, version=17.1.12, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, tcib_managed=true, 
io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1761123044) Nov 28 03:52:47 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:52:47 localhost podman[97577]: 2025-11-28 08:52:47.353554438 +0000 UTC m=+0.442115189 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 
17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:52:47 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:52:47 localhost systemd[1]: tmp-crun.5Sx5a0.mount: Deactivated successfully. Nov 28 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:52:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:52:52 localhost podman[97678]: 2025-11-28 08:52:52.997304525 +0000 UTC m=+0.100006566 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container) Nov 28 03:52:53 localhost podman[97678]: 2025-11-28 08:52:53.037854383 +0000 UTC m=+0.140556374 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:52:53 localhost podman[97680]: 2025-11-28 08:52:53.041826014 +0000 UTC m=+0.141475631 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, architecture=x86_64) Nov 28 03:52:53 localhost podman[97678]: unhealthy Nov 28 03:52:53 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:52:53 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:52:53 localhost systemd[1]: tmp-crun.T6iAaa.mount: Deactivated successfully. 
Nov 28 03:52:53 localhost podman[97680]: 2025-11-28 08:52:53.128579703 +0000 UTC m=+0.228229330 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 03:52:53 localhost podman[97680]: unhealthy Nov 28 03:52:53 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:52:53 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:52:53 localhost podman[97679]: 2025-11-28 08:52:53.096298611 +0000 UTC m=+0.198246949 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:52:53 localhost podman[97679]: 2025-11-28 08:52:53.17627345 +0000 UTC m=+0.278221798 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible) Nov 28 03:52:53 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:53:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:53:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:53:15 localhost podman[97743]: 2025-11-28 08:53:15.986617701 +0000 UTC m=+0.092256978 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:53:16 localhost podman[97744]: 2025-11-28 08:53:16.03538668 +0000 UTC m=+0.138073707 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, 
name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible) Nov 28 03:53:16 localhost podman[97744]: 2025-11-28 08:53:16.071867372 +0000 UTC m=+0.174554419 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Nov 28 03:53:16 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:53:16 localhost podman[97743]: 2025-11-28 08:53:16.20152015 +0000 UTC m=+0.307159477 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:53:16 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:53:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:53:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:53:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:53:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:53:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:53:18 localhost systemd[1]: tmp-crun.UNLLMX.mount: Deactivated successfully. 
Nov 28 03:53:18 localhost podman[97798]: 2025-11-28 08:53:18.060933311 +0000 UTC m=+0.151160180 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, distribution-scope=public) Nov 28 03:53:18 localhost podman[97800]: 2025-11-28 08:53:18.018730773 +0000 UTC m=+0.103169714 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:53:18 localhost podman[97798]: 2025-11-28 08:53:18.097430113 +0000 UTC m=+0.187657032 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, version=17.1.12) Nov 28 03:53:18 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:53:18 localhost podman[97790]: 2025-11-28 08:53:18.113271861 +0000 UTC m=+0.215888571 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Nov 28 03:53:18 localhost podman[97790]: 2025-11-28 08:53:18.150708882 +0000 UTC m=+0.253325582 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 28 03:53:18 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:53:18 localhost podman[97792]: 2025-11-28 08:53:18.195940133 +0000 UTC m=+0.290141554 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:53:18 localhost podman[97792]: 2025-11-28 08:53:18.208701786 +0000 UTC m=+0.302903257 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:53:18 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:53:18 localhost podman[97791]: 2025-11-28 08:53:18.260340814 +0000 UTC m=+0.355004299 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:53:18 localhost podman[97791]: 2025-11-28 08:53:18.31742657 +0000 UTC m=+0.412090085 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 03:53:18 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:53:18 localhost podman[97800]: 2025-11-28 08:53:18.375547367 +0000 UTC m=+0.459986358 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_migration_target, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4) Nov 28 03:53:18 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:53:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:53:23 localhost podman[97979]: 2025-11-28 08:53:23.993950758 +0000 UTC m=+0.096568301 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T23:34:05Z, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller) Nov 28 03:53:24 localhost systemd[1]: tmp-crun.iqQYFR.mount: Deactivated successfully. Nov 28 03:53:24 localhost podman[97980]: 2025-11-28 08:53:24.042127779 +0000 UTC m=+0.140554804 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, architecture=x86_64, container_name=nova_compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) 
Nov 28 03:53:24 localhost podman[97981]: 2025-11-28 08:53:24.100464483 +0000 UTC m=+0.195782072 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, 
architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:53:24 localhost podman[97979]: 2025-11-28 08:53:24.113610188 +0000 UTC m=+0.216227711 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 
17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64) Nov 28 03:53:24 localhost podman[97979]: unhealthy Nov 28 03:53:24 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:53:24 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:53:24 localhost podman[97980]: 2025-11-28 08:53:24.16894058 +0000 UTC m=+0.267367625 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:53:24 localhost podman[97981]: 2025-11-28 08:53:24.169304841 +0000 UTC m=+0.264622440 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z) Nov 28 03:53:24 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:53:24 localhost podman[97981]: unhealthy Nov 28 03:53:24 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:53:24 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:53:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:53:46 localhost podman[98041]: 2025-11-28 08:53:46.970136847 +0000 UTC m=+0.077643109 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-collectd-container) Nov 28 03:53:46 localhost podman[98041]: 2025-11-28 08:53:46.984416386 +0000 UTC m=+0.091922608 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container) Nov 28 03:53:46 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:53:47 localhost systemd[1]: tmp-crun.KFTIO3.mount: Deactivated successfully. 
Nov 28 03:53:47 localhost podman[98040]: 2025-11-28 08:53:47.07948915 +0000 UTC m=+0.189948863 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, architecture=x86_64) Nov 28 03:53:47 localhost podman[98040]: 2025-11-28 08:53:47.267243104 +0000 UTC m=+0.377702857 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1, io.buildah.version=1.41.4, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.) Nov 28 03:53:47 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:53:48 localhost podman[98090]: 2025-11-28 08:53:48.974176735 +0000 UTC m=+0.083394396 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team) Nov 28 03:53:48 localhost podman[98090]: 2025-11-28 08:53:48.982098369 +0000 UTC m=+0.091315950 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:53:48 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:53:49 localhost systemd[1]: tmp-crun.QxwICa.mount: Deactivated successfully. 
Nov 28 03:53:49 localhost podman[98091]: 2025-11-28 08:53:49.038706769 +0000 UTC m=+0.143472793 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12) Nov 28 03:53:49 localhost podman[98092]: 2025-11-28 08:53:49.078338488 +0000 UTC m=+0.180621125 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com) Nov 28 03:53:49 localhost podman[98092]: 2025-11-28 08:53:49.086477499 +0000 UTC m=+0.188760206 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1) Nov 28 03:53:49 localhost podman[98091]: 2025-11-28 08:53:49.093435013 +0000 UTC m=+0.198200967 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1) Nov 28 03:53:49 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:53:49 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:53:49 localhost podman[98102]: 2025-11-28 08:53:49.136108475 +0000 UTC m=+0.230423758 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Nov 28 03:53:49 localhost podman[98098]: 2025-11-28 08:53:49.19575389 +0000 UTC m=+0.294622233 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:53:49 localhost podman[98098]: 2025-11-28 08:53:49.221029057 +0000 UTC m=+0.319897380 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1) Nov 28 03:53:49 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:53:49 localhost podman[98102]: 2025-11-28 08:53:49.558454976 +0000 UTC m=+0.652770278 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64) Nov 28 03:53:49 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:53:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:53:54 localhost podman[98206]: 2025-11-28 08:53:54.987282844 +0000 UTC m=+0.090866427 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute) Nov 28 03:53:55 localhost systemd[1]: tmp-crun.mcD4Ua.mount: Deactivated successfully. 
Nov 28 03:53:55 localhost podman[98206]: 2025-11-28 08:53:55.04572594 +0000 UTC m=+0.149309523 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, architecture=x86_64, container_name=nova_compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:53:55 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:53:55 localhost podman[98207]: 2025-11-28 08:53:55.098210985 +0000 UTC m=+0.200506618 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z) Nov 28 03:53:55 localhost podman[98205]: 2025-11-28 08:53:55.051340933 +0000 UTC m=+0.155842404 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 
ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4) Nov 28 03:53:55 localhost podman[98205]: 2025-11-28 08:53:55.136050109 +0000 UTC m=+0.240551570 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true) Nov 28 03:53:55 localhost podman[98205]: unhealthy Nov 28 03:53:55 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:53:55 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:53:55 localhost podman[98207]: 2025-11-28 08:53:55.189971047 +0000 UTC m=+0.292266680 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
vcs-type=git, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, batch=17.1_20251118.1, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent) Nov 28 03:53:55 localhost podman[98207]: unhealthy Nov 28 03:53:55 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:53:55 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:54:13 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:54:13 localhost recover_tripleo_nova_virtqemud[98274]: 62642 Nov 28 03:54:13 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:54:13 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 03:54:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:54:18 localhost podman[98276]: 2025-11-28 08:54:18.009101058 +0000 UTC m=+0.116928387 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:54:18 localhost podman[98276]: 2025-11-28 08:54:18.017558269 +0000 UTC m=+0.125385628 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vcs-type=git, io.openshift.expose-services=) Nov 28 03:54:18 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:54:18 localhost podman[98275]: 2025-11-28 08:54:18.063503831 +0000 UTC m=+0.171105563 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=metrics_qdr, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1) Nov 28 03:54:18 localhost podman[98275]: 2025-11-28 08:54:18.27968547 +0000 UTC m=+0.387287182 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 
'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, maintainer=OpenStack TripleO Team) Nov 28 03:54:18 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 03:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:54:19 localhost systemd[1]: tmp-crun.mQTKrY.mount: Deactivated successfully. Nov 28 03:54:19 localhost podman[98325]: 2025-11-28 08:54:19.986299241 +0000 UTC m=+0.085420367 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4) Nov 28 03:54:20 localhost podman[98325]: 2025-11-28 08:54:20.030312935 +0000 UTC m=+0.129434071 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, version=17.1.12, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi) Nov 28 03:54:20 localhost podman[98324]: 2025-11-28 08:54:20.03795217 +0000 UTC m=+0.143291807 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, container_name=logrotate_crond, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:54:20 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:54:20 localhost podman[98332]: 2025-11-28 08:54:20.046221005 +0000 UTC m=+0.135942453 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1761123044, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:54:20 localhost podman[98334]: 2025-11-28 08:54:19.968141183 +0000 UTC m=+0.061581065 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:54:20 localhost podman[98326]: 2025-11-28 08:54:20.093272352 +0000 UTC m=+0.190045426 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:54:20 localhost podman[98324]: 2025-11-28 08:54:20.101957009 +0000 UTC m=+0.207296626 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, container_name=logrotate_crond, tcib_managed=true, name=rhosp17/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 
cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:32Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git) Nov 28 03:54:20 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:54:20 localhost podman[98326]: 2025-11-28 08:54:20.128578277 +0000 UTC m=+0.225351301 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, 
vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, container_name=iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:54:20 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:54:20 localhost podman[98332]: 2025-11-28 08:54:20.17971876 +0000 UTC m=+0.269440198 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, vcs-type=git, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team) Nov 28 03:54:20 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:54:20 localhost podman[98334]: 2025-11-28 08:54:20.301536818 +0000 UTC m=+0.394976760 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute) Nov 28 03:54:20 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:54:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:54:25 localhost systemd[1]: tmp-crun.tXS3NH.mount: Deactivated successfully. Nov 28 03:54:26 localhost podman[98513]: 2025-11-28 08:54:26.016336861 +0000 UTC m=+0.108322413 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, version=17.1.12, tcib_managed=true, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, batch=17.1_20251118.1) Nov 28 03:54:26 localhost podman[98511]: 2025-11-28 08:54:26.032819198 +0000 UTC m=+0.133334272 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4) Nov 28 03:54:26 localhost podman[98512]: 2025-11-28 08:54:25.985981787 +0000 UTC m=+0.087456770 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., 
container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container) Nov 28 03:54:26 localhost podman[98511]: 2025-11-28 08:54:26.04360176 +0000 UTC m=+0.144116904 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Nov 28 03:54:26 localhost podman[98511]: unhealthy Nov 28 03:54:26 localhost podman[98513]: 2025-11-28 08:54:26.055212107 +0000 UTC m=+0.147197649 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z) Nov 28 03:54:26 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:54:26 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:54:26 localhost podman[98513]: unhealthy Nov 28 03:54:26 localhost podman[98512]: 2025-11-28 08:54:26.066142463 +0000 UTC m=+0.167617476 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64) Nov 28 03:54:26 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:54:26 localhost systemd[1]: 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:54:26 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:54:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:54:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:54:48 localhost podman[98574]: 2025-11-28 08:54:48.969024999 +0000 UTC m=+0.073177431 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, version=17.1.12) Nov 28 03:54:48 localhost podman[98574]: 2025-11-28 08:54:48.978629655 +0000 UTC m=+0.082782107 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git) Nov 28 03:54:48 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:54:49 localhost podman[98573]: 2025-11-28 08:54:49.071807481 +0000 UTC m=+0.183283839 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd) Nov 28 03:54:49 localhost podman[98573]: 2025-11-28 08:54:49.265955212 +0000 UTC m=+0.377431610 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:54:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 03:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:54:50 localhost podman[98625]: 2025-11-28 08:54:50.987990418 +0000 UTC m=+0.089054711 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 
'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:54:51 localhost podman[98626]: 2025-11-28 08:54:51.042103662 +0000 UTC m=+0.139721499 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, 
architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, version=17.1.12, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 03:54:51 localhost podman[98625]: 2025-11-28 08:54:51.04759069 +0000 UTC m=+0.148654953 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:54:51 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:54:51 localhost podman[98622]: 2025-11-28 08:54:51.100390445 +0000 UTC m=+0.202191100 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 28 03:54:51 localhost podman[98622]: 2025-11-28 08:54:51.141572682 +0000 UTC m=+0.243373357 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:54:51 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:54:51 localhost podman[98624]: 2025-11-28 08:54:51.202498405 +0000 UTC m=+0.304818126 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 03:54:51 localhost podman[98623]: 2025-11-28 08:54:51.173437151 +0000 UTC m=+0.279377024 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:54:51 localhost podman[98624]: 2025-11-28 08:54:51.243555338 +0000 UTC m=+0.345875019 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:54:51 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:54:51 localhost podman[98623]: 2025-11-28 08:54:51.256399993 +0000 UTC m=+0.362339836 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=) Nov 28 03:54:51 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:54:51 localhost podman[98626]: 2025-11-28 08:54:51.443666423 +0000 UTC m=+0.541284260 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true) Nov 28 03:54:51 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:54:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:54:56 localhost systemd[1]: tmp-crun.nX0E7M.mount: Deactivated successfully. Nov 28 03:54:56 localhost podman[98732]: 2025-11-28 08:54:56.992606795 +0000 UTC m=+0.091308979 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 28 03:54:57 localhost podman[98732]: 2025-11-28 08:54:57.03957735 +0000 UTC m=+0.138279524 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:54:57 localhost podman[98732]: unhealthy Nov 28 03:54:57 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:54:57 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:54:57 localhost podman[98734]: 2025-11-28 08:54:57.056963484 +0000 UTC m=+0.151014325 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:54:57 localhost podman[98734]: 2025-11-28 08:54:57.107007114 +0000 UTC 
m=+0.201057925 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 03:54:57 localhost podman[98734]: unhealthy Nov 28 03:54:57 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:54:57 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:54:57 localhost podman[98733]: 2025-11-28 08:54:57.111164282 +0000 UTC m=+0.206359358 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container) Nov 28 03:54:57 localhost podman[98733]: 2025-11-28 08:54:57.19564096 +0000 UTC m=+0.290836056 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., 
tcib_managed=true, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute) Nov 28 03:54:57 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:55:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:55:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:55:19 localhost podman[98798]: 2025-11-28 08:55:19.990627657 +0000 UTC m=+0.077195456 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vcs-type=git, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 28 03:55:20 localhost podman[98799]: 2025-11-28 08:55:20.04924399 +0000 UTC m=+0.132923750 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red 
Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:55:20 localhost podman[98799]: 2025-11-28 08:55:20.059373511 +0000 UTC m=+0.143053241 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:55:20 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:55:20 localhost podman[98798]: 2025-11-28 08:55:20.208232079 +0000 UTC m=+0.294799938 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, 
name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team) Nov 28 03:55:20 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:55:21 localhost podman[98848]: 2025-11-28 08:55:21.981454339 +0000 UTC m=+0.087326367 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:55:22 localhost systemd[1]: tmp-crun.rkGrsR.mount: Deactivated successfully. Nov 28 03:55:22 localhost podman[98849]: 2025-11-28 08:55:22.02665906 +0000 UTC m=+0.128842103 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git) Nov 28 03:55:22 localhost podman[98850]: 2025-11-28 08:55:22.053015171 +0000 UTC m=+0.150191441 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, 
distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, tcib_managed=true, io.openshift.expose-services=) Nov 28 03:55:22 localhost podman[98849]: 2025-11-28 08:55:22.084686875 +0000 UTC m=+0.186869928 container exec_died 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, tcib_managed=true, version=17.1.12, distribution-scope=public, 
build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git) Nov 28 03:55:22 localhost podman[98850]: 2025-11-28 08:55:22.085288834 +0000 UTC m=+0.182465044 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-iscsid-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step3, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, tcib_managed=true) Nov 28 03:55:22 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:55:22 localhost podman[98851]: 2025-11-28 08:55:22.096188288 +0000 UTC m=+0.189610582 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git) Nov 28 03:55:22 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:55:22 localhost podman[98857]: 2025-11-28 08:55:22.15183878 +0000 UTC m=+0.243830031 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, 
com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com) Nov 28 03:55:22 localhost podman[98851]: 2025-11-28 08:55:22.159528427 +0000 UTC m=+0.252950771 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:55:22 localhost podman[98848]: 2025-11-28 08:55:22.166446349 +0000 UTC m=+0.272318397 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, distribution-scope=public, release=1761123044, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64) Nov 28 03:55:22 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:55:22 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:55:22 localhost podman[98857]: 2025-11-28 08:55:22.522391958 +0000 UTC m=+0.614383119 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com) Nov 28 03:55:22 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:55:22 localhost systemd[1]: tmp-crun.5xsfxU.mount: Deactivated successfully. Nov 28 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:55:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:55:27 localhost podman[99036]: 2025-11-28 08:55:27.983718664 +0000 UTC m=+0.086696637 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5) Nov 28 03:55:28 localhost podman[99035]: 2025-11-28 08:55:28.035705744 +0000 UTC m=+0.139497862 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:55:28 localhost podman[99036]: 2025-11-28 08:55:28.046671101 +0000 UTC m=+0.149649044 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12) Nov 28 03:55:28 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:55:28 localhost podman[99035]: 2025-11-28 08:55:28.079937643 +0000 UTC m=+0.183729741 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, architecture=x86_64) Nov 28 03:55:28 localhost podman[99035]: unhealthy Nov 28 03:55:28 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:55:28 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:55:28 localhost podman[99037]: 2025-11-28 08:55:28.096381689 +0000 UTC m=+0.194185303 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4) Nov 28 03:55:28 localhost podman[99037]: 2025-11-28 08:55:28.114421695 +0000 UTC m=+0.212225259 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent) Nov 28 03:55:28 localhost podman[99037]: unhealthy Nov 28 03:55:28 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:55:28 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:55:28 localhost systemd[1]: tmp-crun.ZIoVEc.mount: Deactivated successfully. Nov 28 03:55:30 localhost sshd[99098]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:55:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:55:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:55:50 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:55:50 localhost recover_tripleo_nova_virtqemud[99102]: 62642 Nov 28 03:55:50 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:55:50 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 03:55:50 localhost podman[99099]: 2025-11-28 08:55:50.993709115 +0000 UTC m=+0.089557836 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:55:51 localhost systemd[1]: tmp-crun.vrZF4I.mount: Deactivated successfully. Nov 28 03:55:51 localhost podman[99100]: 2025-11-28 08:55:51.048261342 +0000 UTC m=+0.145266039 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, vcs-type=git, version=17.1.12, io.openshift.expose-services=, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team) Nov 28 03:55:51 localhost podman[99100]: 2025-11-28 08:55:51.083412274 +0000 UTC m=+0.180417021 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, release=1761123044, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, architecture=x86_64, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:55:51 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:55:51 localhost podman[99099]: 2025-11-28 08:55:51.187728082 +0000 UTC m=+0.283576833 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:55:51 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:55:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:55:52 localhost systemd[1]: tmp-crun.tQrJmJ.mount: Deactivated successfully. Nov 28 03:55:53 localhost podman[99152]: 2025-11-28 08:55:52.999175968 +0000 UTC m=+0.096617173 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Nov 28 03:55:53 localhost podman[99152]: 2025-11-28 08:55:53.034400451 +0000 UTC m=+0.131841476 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': 
['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4) Nov 28 03:55:53 localhost systemd[1]: tmp-crun.LJfw7Z.mount: Deactivated successfully. 
Nov 28 03:55:53 localhost podman[99158]: 2025-11-28 08:55:53.04930378 +0000 UTC m=+0.142259027 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=) Nov 28 03:55:53 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:55:53 localhost podman[99151]: 2025-11-28 08:55:53.091461687 +0000 UTC m=+0.189043427 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, 
org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 03:55:53 localhost podman[99151]: 2025-11-28 08:55:53.104388584 +0000 UTC m=+0.201970314 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, 
io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid) Nov 28 03:55:53 localhost systemd[1]: 
9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:55:53 localhost podman[99150]: 2025-11-28 08:55:53.133215481 +0000 UTC m=+0.234690110 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:55:53 localhost podman[99149]: 2025-11-28 08:55:53.179607207 +0000 UTC m=+0.284590954 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git) Nov 28 03:55:53 localhost podman[99150]: 2025-11-28 08:55:53.207791304 +0000 UTC m=+0.309265963 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 03:55:53 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:55:53 localhost podman[99149]: 2025-11-28 08:55:53.263105116 +0000 UTC m=+0.368088923 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, 
url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible) Nov 28 03:55:53 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:55:53 localhost podman[99158]: 2025-11-28 08:55:53.365561186 +0000 UTC m=+0.458516423 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64) Nov 28 03:55:53 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:55:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:55:58 localhost podman[99260]: 2025-11-28 08:55:58.977659103 +0000 UTC m=+0.085185562 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:55:58 localhost podman[99260]: 2025-11-28 08:55:58.99642201 +0000 UTC m=+0.103948429 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, 
managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:55:59 localhost podman[99260]: unhealthy Nov 28 03:55:59 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:55:59 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:55:59 localhost podman[99262]: 2025-11-28 08:55:59.073515131 +0000 UTC m=+0.174517518 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:55:59 localhost podman[99262]: 2025-11-28 08:55:59.087559223 +0000 UTC m=+0.188561670 container exec_died 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, tcib_managed=true) Nov 28 03:55:59 localhost podman[99262]: unhealthy Nov 28 03:55:59 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:55:59 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:55:59 localhost podman[99261]: 2025-11-28 08:55:59.137430647 +0000 UTC m=+0.241021734 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container) Nov 28 03:55:59 localhost podman[99261]: 2025-11-28 08:55:59.190054625 +0000 UTC m=+0.293645672 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, 
name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com) Nov 28 03:55:59 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:56:15 localhost systemd-logind[763]: Session 28 logged out. Waiting for processes to exit. Nov 28 03:56:15 localhost systemd[1]: session-28.scope: Deactivated successfully. Nov 28 03:56:15 localhost systemd[1]: session-28.scope: Consumed 7min 12.106s CPU time. Nov 28 03:56:15 localhost systemd-logind[763]: Removed session 28. Nov 28 03:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:56:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:56:21 localhost podman[99326]: 2025-11-28 08:56:21.984867717 +0000 UTC m=+0.090897257 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, 
batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:56:22 localhost systemd[1]: tmp-crun.8uG734.mount: Deactivated successfully. Nov 28 03:56:22 localhost podman[99327]: 2025-11-28 08:56:22.037401822 +0000 UTC m=+0.139761220 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:56:22 localhost podman[99327]: 2025-11-28 08:56:22.047323777 +0000 UTC m=+0.149683165 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, 
com.redhat.component=openstack-collectd-container, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4) Nov 28 03:56:22 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:56:22 localhost podman[99326]: 2025-11-28 08:56:22.224383763 +0000 UTC m=+0.330413373 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1) Nov 28 03:56:22 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:56:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:56:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:56:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:56:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:56:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:56:23 localhost podman[99376]: 2025-11-28 08:56:23.98159264 +0000 UTC m=+0.079838036 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12) Nov 28 03:56:23 localhost podman[99378]: 2025-11-28 08:56:23.998963425 +0000 UTC m=+0.087520533 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4) Nov 28 03:56:24 localhost podman[99376]: 2025-11-28 08:56:24.009392745 +0000 UTC m=+0.107638142 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 03:56:24 localhost systemd[1]: 
7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:56:24 localhost podman[99389]: 2025-11-28 08:56:24.111887508 +0000 UTC m=+0.195014359 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:56:24 localhost podman[99378]: 2025-11-28 08:56:24.131145271 +0000 UTC m=+0.219702419 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute) Nov 28 03:56:24 localhost podman[99375]: 2025-11-28 08:56:24.079254084 +0000 UTC m=+0.178003725 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:56:24 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:56:24 localhost podman[99377]: 2025-11-28 08:56:24.061139928 +0000 UTC m=+0.151603854 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, 
batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, url=https://www.redhat.com, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=) Nov 28 03:56:24 localhost podman[99377]: 2025-11-28 08:56:24.195529631 +0000 UTC m=+0.285993557 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, version=17.1.12, com.redhat.component=openstack-iscsid-container, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z) Nov 28 03:56:24 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:56:24 localhost podman[99375]: 2025-11-28 08:56:24.212610516 +0000 UTC m=+0.311360157 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, tcib_managed=true, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, 
io.openshift.expose-services=, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron) Nov 28 03:56:24 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:56:24 localhost podman[99389]: 2025-11-28 08:56:24.512579043 +0000 UTC m=+0.595705894 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-type=git, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 
'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com) Nov 28 03:56:24 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:56:25 localhost systemd[1]: Stopping User Manager for UID 1003... Nov 28 03:56:25 localhost systemd[36764]: Activating special unit Exit the Session... Nov 28 03:56:25 localhost systemd[36764]: Removed slice User Background Tasks Slice. Nov 28 03:56:25 localhost systemd[36764]: Stopped target Main User Target. Nov 28 03:56:25 localhost systemd[36764]: Stopped target Basic System. Nov 28 03:56:25 localhost systemd[36764]: Stopped target Paths. Nov 28 03:56:25 localhost systemd[36764]: Stopped target Sockets. Nov 28 03:56:25 localhost systemd[36764]: Stopped target Timers. 
Nov 28 03:56:25 localhost systemd[36764]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 28 03:56:25 localhost systemd[36764]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 03:56:25 localhost systemd[36764]: Closed D-Bus User Message Bus Socket. Nov 28 03:56:25 localhost systemd[36764]: Stopped Create User's Volatile Files and Directories. Nov 28 03:56:25 localhost systemd[36764]: Removed slice User Application Slice. Nov 28 03:56:25 localhost systemd[36764]: Reached target Shutdown. Nov 28 03:56:25 localhost systemd[36764]: Finished Exit the Session. Nov 28 03:56:25 localhost systemd[36764]: Reached target Exit the Session. Nov 28 03:56:25 localhost systemd[1]: user@1003.service: Deactivated successfully. Nov 28 03:56:25 localhost systemd[1]: Stopped User Manager for UID 1003. Nov 28 03:56:25 localhost systemd[1]: user@1003.service: Consumed 4.603s CPU time, read 0B from disk, written 7.0K to disk. Nov 28 03:56:26 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Nov 28 03:56:26 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Nov 28 03:56:26 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Nov 28 03:56:26 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Nov 28 03:56:26 localhost systemd[1]: Removed slice User Slice of UID 1003. Nov 28 03:56:26 localhost systemd[1]: user-1003.slice: Consumed 7min 16.736s CPU time. Nov 28 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:56:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:56:29 localhost podman[99568]: 2025-11-28 08:56:29.972667061 +0000 UTC m=+0.080089754 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64) Nov 28 03:56:29 localhost podman[99568]: 2025-11-28 08:56:29.99246961 +0000 UTC m=+0.099892313 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, container_name=ovn_controller) Nov 28 03:56:29 localhost podman[99568]: unhealthy Nov 28 03:56:30 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:56:30 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:56:30 localhost podman[99569]: 2025-11-28 08:56:30.082499129 +0000 UTC m=+0.186776446 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, 
tcib_managed=true) Nov 28 03:56:30 localhost podman[99569]: 2025-11-28 08:56:30.114477754 +0000 UTC m=+0.218755131 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step5, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:56:30 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:56:30 localhost podman[99570]: 2025-11-28 08:56:30.13419981 +0000 UTC m=+0.234859875 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, 
architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 03:56:30 localhost podman[99570]: 2025-11-28 08:56:30.154383231 +0000 UTC m=+0.255043306 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4) Nov 28 03:56:30 localhost podman[99570]: unhealthy Nov 28 03:56:30 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:56:30 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:56:52 localhost podman[99631]: 2025-11-28 08:56:52.974920744 +0000 UTC m=+0.082103226 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:56:53 localhost podman[99632]: 2025-11-28 08:56:53.029708158 +0000 UTC m=+0.128806282 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, 
io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:56:53 localhost podman[99632]: 2025-11-28 08:56:53.043394009 +0000 UTC m=+0.142492113 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd) Nov 28 03:56:53 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:56:53 localhost podman[99631]: 2025-11-28 08:56:53.164494305 +0000 UTC m=+0.271676787 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:56:53 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:56:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:56:54 localhost podman[99681]: 2025-11-28 08:56:54.983773422 +0000 UTC m=+0.084381906 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Nov 28 03:56:54 localhost podman[99681]: 2025-11-28 08:56:54.995386069 +0000 UTC m=+0.095994613 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, 
version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64) Nov 28 03:56:55 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 03:56:55 localhost podman[99685]: 2025-11-28 08:56:55.047620536 +0000 UTC m=+0.137661235 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:56:55 localhost podman[99682]: 2025-11-28 08:56:55.094650823 +0000 UTC m=+0.188988544 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:56:55 localhost podman[99685]: 2025-11-28 08:56:55.103414832 +0000 UTC m=+0.193455491 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:56:55 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:56:55 localhost podman[99682]: 2025-11-28 08:56:55.14954097 +0000 UTC m=+0.243878761 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 
17.1 ceilometer-ipmi, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:56:55 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:56:55 localhost podman[99690]: 2025-11-28 08:56:55.201780037 +0000 UTC m=+0.287944217 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public) Nov 28 03:56:55 localhost podman[99683]: 2025-11-28 08:56:55.155810123 +0000 UTC m=+0.247835643 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, 
build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 03:56:55 localhost podman[99683]: 2025-11-28 08:56:55.240734385 +0000 UTC m=+0.332759865 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=) Nov 28 03:56:55 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:56:55 localhost podman[99690]: 2025-11-28 08:56:55.578755891 +0000 UTC m=+0.664920091 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:56:55 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:56:57 localhost sshd[99794]: main: sshd: ssh-rsa algorithm is disabled Nov 28 03:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:57:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:57:00 localhost podman[99796]: 2025-11-28 08:57:00.975843034 +0000 UTC m=+0.082176628 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 28 03:57:00 localhost podman[99796]: 2025-11-28 08:57:00.991369292 +0000 UTC m=+0.097702886 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:34:05Z, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:57:00 localhost podman[99796]: unhealthy Nov 28 03:57:01 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:57:01 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:57:01 localhost podman[99797]: 2025-11-28 08:57:01.081171983 +0000 UTC m=+0.185648691 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, container_name=nova_compute, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:57:01 localhost podman[99798]: 2025-11-28 08:57:01.136004119 +0000 UTC m=+0.236573976 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn) Nov 28 03:57:01 localhost podman[99798]: 2025-11-28 08:57:01.155451447 +0000 UTC m=+0.256021274 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, 
config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc.) Nov 28 03:57:01 localhost podman[99798]: unhealthy Nov 28 03:57:01 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:57:01 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:57:01 localhost podman[99797]: 2025-11-28 08:57:01.188509684 +0000 UTC m=+0.292986402 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:57:01 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:57:23 localhost podman[99863]: 2025-11-28 08:57:23.996145588 +0000 UTC m=+0.096832928 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 
'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, architecture=x86_64) Nov 28 03:57:24 localhost podman[99864]: 2025-11-28 08:57:24.051114359 +0000 UTC m=+0.151824950 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z) Nov 28 03:57:24 localhost podman[99864]: 2025-11-28 08:57:24.061736966 +0000 UTC m=+0.162447547 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, 
io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public, tcib_managed=true, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 03:57:24 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:57:24 localhost podman[99863]: 2025-11-28 08:57:24.19940559 +0000 UTC m=+0.300092900 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:57:24 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:57:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:57:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:57:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 03:57:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:57:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 03:57:25 localhost podman[99912]: 2025-11-28 08:57:25.988330904 +0000 UTC m=+0.090354110 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, tcib_managed=true) Nov 28 03:57:26 localhost podman[99913]: 2025-11-28 08:57:26.091706152 +0000 UTC m=+0.192848531 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d) Nov 28 03:57:26 localhost podman[99913]: 2025-11-28 08:57:26.098200943 +0000 UTC m=+0.199343342 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12) Nov 28 03:57:26 localhost podman[99918]: 2025-11-28 08:57:26.058671997 +0000 UTC m=+0.153371898 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute) Nov 28 03:57:26 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:57:26 localhost podman[99912]: 2025-11-28 08:57:26.117432625 +0000 UTC m=+0.219455891 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 03:57:26 localhost podman[99911]: 2025-11-28 08:57:26.026218259 +0000 UTC m=+0.135624142 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, 
batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron) Nov 28 03:57:26 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:57:26 localhost podman[99918]: 2025-11-28 08:57:26.144422694 +0000 UTC m=+0.239122525 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:57:26 localhost podman[99920]: 2025-11-28 08:57:26.101264967 +0000 UTC m=+0.192724828 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=nova_migration_target, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 03:57:26 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:57:26 localhost podman[99911]: 2025-11-28 08:57:26.158019972 +0000 UTC m=+0.267425835 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible) Nov 28 03:57:26 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:57:26 localhost podman[99920]: 2025-11-28 08:57:26.449363833 +0000 UTC m=+0.540823714 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target) Nov 28 03:57:26 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:57:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:57:31 localhost podman[100102]: 2025-11-28 08:57:31.973822322 +0000 UTC m=+0.074969167 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z) Nov 28 03:57:32 localhost podman[100103]: 2025-11-28 08:57:32.033812968 +0000 UTC m=+0.134192479 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64) Nov 28 03:57:32 localhost podman[100102]: 2025-11-28 08:57:32.050266163 +0000 UTC m=+0.151413008 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 03:57:32 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:57:32 localhost podman[100103]: 2025-11-28 08:57:32.070273149 +0000 UTC m=+0.170652670 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, container_name=ovn_metadata_agent) Nov 28 03:57:32 localhost podman[100103]: unhealthy Nov 28 03:57:32 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:57:32 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:57:32 localhost podman[100101]: 2025-11-28 08:57:31.954753095 +0000 UTC m=+0.062190243 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, 
name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:57:32 localhost podman[100101]: 2025-11-28 08:57:32.135598998 +0000 UTC m=+0.243036166 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z) Nov 28 03:57:32 localhost podman[100101]: unhealthy Nov 28 03:57:32 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:57:32 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:57:53 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:57:53 localhost recover_tripleo_nova_virtqemud[100166]: 62642 Nov 28 03:57:53 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:57:53 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 03:57:54 localhost podman[100167]: 2025-11-28 08:57:54.980999047 +0000 UTC m=+0.087588435 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, 
container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-qdrouterd, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 03:57:55 localhost podman[100168]: 2025-11-28 08:57:55.039544027 +0000 UTC m=+0.143679549 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, url=https://www.redhat.com) Nov 28 03:57:55 localhost podman[100168]: 2025-11-28 08:57:55.049344028 +0000 UTC m=+0.153479570 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, 
build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd) Nov 28 03:57:55 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:57:55 localhost podman[100167]: 2025-11-28 08:57:55.184504616 +0000 UTC m=+0.291093984 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:57:55 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:57:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:57:57 localhost podman[100218]: 2025-11-28 08:57:57.022995744 +0000 UTC m=+0.123874761 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12) Nov 28 03:57:57 localhost systemd[1]: tmp-crun.RKopZg.mount: Deactivated successfully. Nov 28 03:57:57 localhost podman[100220]: 2025-11-28 08:57:57.040502702 +0000 UTC m=+0.134923391 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044) Nov 28 03:57:57 localhost podman[100218]: 2025-11-28 08:57:57.061738465 +0000 UTC m=+0.162617442 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:57:57 localhost podman[100227]: 2025-11-28 08:57:57.108157273 +0000 UTC m=+0.195875986 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 
(image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:57:57 localhost podman[100219]: 2025-11-28 08:57:57.144924233 +0000 UTC m=+0.242703505 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible) Nov 28 03:57:57 localhost podman[100220]: 2025-11-28 08:57:57.176412213 +0000 UTC m=+0.270832922 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git) Nov 28 03:57:57 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 03:57:57 localhost podman[100219]: 2025-11-28 08:57:57.196248083 +0000 UTC m=+0.294027355 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1) Nov 28 03:57:57 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:57:57 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:57:57 localhost podman[100221]: 2025-11-28 08:57:57.25698988 +0000 UTC m=+0.345777886 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, release=1761123044, batch=17.1_20251118.1, vcs-type=git) Nov 28 03:57:57 localhost podman[100221]: 2025-11-28 08:57:57.314044686 +0000 UTC m=+0.402832692 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, 
org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:57:57 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:57:57 localhost podman[100227]: 2025-11-28 08:57:57.467506406 +0000 UTC m=+0.555225129 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:57:57 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:58:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:58:02 localhost podman[100328]: 2025-11-28 08:58:02.984403992 +0000 UTC m=+0.091354501 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 03:58:03 localhost podman[100329]: 2025-11-28 08:58:03.036904907 +0000 UTC m=+0.141077000 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-type=git, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:58:03 localhost systemd[1]: tmp-crun.mPQFVt.mount: Deactivated successfully. 
Nov 28 03:58:03 localhost podman[100330]: 2025-11-28 08:58:03.090781004 +0000 UTC m=+0.190720117 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, konflux.additional-tags=17.1.12 
17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4) Nov 28 03:58:03 localhost podman[100329]: 2025-11-28 08:58:03.097318905 +0000 UTC m=+0.201490958 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, config_id=tripleo_step5, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Nov 28 03:58:03 localhost podman[100328]: 2025-11-28 08:58:03.107778467 +0000 UTC m=+0.214728976 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4) Nov 28 03:58:03 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:58:03 localhost podman[100328]: unhealthy Nov 28 03:58:03 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:58:03 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 03:58:03 localhost podman[100330]: 2025-11-28 08:58:03.159054174 +0000 UTC m=+0.258993277 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 03:58:03 localhost podman[100330]: unhealthy Nov 28 03:58:03 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:58:03 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:58:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:58:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:58:25 localhost systemd[1]: tmp-crun.ZdkgZn.mount: Deactivated successfully. 
Nov 28 03:58:25 localhost podman[100394]: 2025-11-28 08:58:25.971631604 +0000 UTC m=+0.083477501 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, build-date=2025-11-18T22:49:46Z) Nov 28 03:58:26 localhost podman[100406]: 2025-11-28 08:58:26.020191658 +0000 UTC m=+0.092539669 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, container_name=collectd, description=Red Hat OpenStack Platform 17.1 
collectd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 03:58:26 localhost podman[100406]: 2025-11-28 08:58:26.055776744 +0000 UTC m=+0.128124775 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, io.openshift.expose-services=, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, 
batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:58:26 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:58:26 localhost podman[100394]: 2025-11-28 08:58:26.157897357 +0000 UTC m=+0.269743254 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 
'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 28 03:58:26 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:58:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:58:27 localhost podman[100448]: 2025-11-28 08:58:27.995429001 +0000 UTC m=+0.090536558 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=) Nov 28 03:58:28 localhost podman[100442]: 2025-11-28 08:58:28.03244926 +0000 UTC m=+0.135924024 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, distribution-scope=public, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:58:28 localhost podman[100443]: 2025-11-28 08:58:28.047631908 +0000 UTC m=+0.145896093 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:58:28 localhost podman[100442]: 2025-11-28 08:58:28.068352996 +0000 
UTC m=+0.171827750 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=logrotate_crond, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, managed_by=tripleo_ansible) Nov 28 03:58:28 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:58:28 localhost podman[100448]: 2025-11-28 08:58:28.101345492 +0000 UTC m=+0.196452979 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, container_name=ceilometer_agent_compute, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 03:58:28 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:58:28 localhost podman[100451]: 2025-11-28 08:58:28.150500554 +0000 UTC m=+0.236713087 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z) Nov 28 03:58:28 localhost podman[100443]: 2025-11-28 08:58:28.174987218 +0000 UTC m=+0.273251413 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git) Nov 28 03:58:28 localhost podman[100444]: 2025-11-28 08:58:28.186961227 +0000 UTC m=+0.283164658 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1761123044, vcs-type=git) Nov 28 03:58:28 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:58:28 localhost podman[100444]: 2025-11-28 08:58:28.197042797 +0000 UTC m=+0.293246238 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible) Nov 28 03:58:28 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:58:28 localhost podman[100451]: 2025-11-28 08:58:28.52945014 +0000 UTC m=+0.615662713 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:58:28 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:58:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:58:33 localhost podman[100633]: 2025-11-28 08:58:33.976383242 +0000 UTC m=+0.075430263 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true) Nov 28 03:58:34 localhost podman[100631]: 2025-11-28 08:58:34.021055158 +0000 UTC m=+0.125611758 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, 
com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4) Nov 28 03:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 
0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:58:34 localhost podman[100631]: 2025-11-28 08:58:34.036277426 +0000 UTC m=+0.140834046 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, distribution-scope=public, container_name=ovn_controller, vcs-type=git, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team) Nov 28 03:58:34 localhost podman[100631]: unhealthy Nov 28 03:58:34 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:58:34 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:58:34 localhost podman[100633]: 2025-11-28 08:58:34.072494851 +0000 UTC m=+0.171541922 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 03:58:34 localhost podman[100633]: unhealthy Nov 28 03:58:34 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:58:34 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:58:34 localhost podman[100632]: 2025-11-28 08:58:34.095309873 +0000 UTC m=+0.195446497 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute) Nov 28 03:58:34 localhost podman[100632]: 2025-11-28 08:58:34.126479312 +0000 UTC m=+0.226615906 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, build-date=2025-11-19T00:36:58Z) Nov 28 03:58:34 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 03:58:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 03:58:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 03:58:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:58:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:58:56 localhost systemd[1]: tmp-crun.WbSk6m.mount: Deactivated successfully. 
Nov 28 03:58:56 localhost podman[100697]: 2025-11-28 08:58:56.983652269 +0000 UTC m=+0.093517850 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, container_name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 28 03:58:57 localhost podman[100697]: 2025-11-28 08:58:57.018710798 +0000 UTC m=+0.128576409 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-collectd, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-type=git) Nov 28 03:58:57 localhost podman[100696]: 2025-11-28 08:58:57.029381747 +0000 UTC m=+0.140069403 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:58:57 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:58:57 localhost podman[100696]: 2025-11-28 08:58:57.222315606 +0000 UTC m=+0.333003232 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, version=17.1.12) Nov 28 03:58:57 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:58:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:58:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:58:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:58:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:58:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:58:58 localhost podman[100747]: 2025-11-28 08:58:58.988965948 +0000 UTC m=+0.094961604 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi) Nov 28 03:58:59 localhost podman[100747]: 2025-11-28 08:58:59.04102878 +0000 UTC m=+0.147024466 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
release=1761123044, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc.) Nov 28 03:58:59 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 03:58:59 localhost podman[100746]: 2025-11-28 08:58:59.093775604 +0000 UTC m=+0.201741751 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:58:59 localhost podman[100749]: 2025-11-28 08:58:59.045127236 +0000 UTC m=+0.144322704 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 03:58:59 localhost podman[100748]: 2025-11-28 08:58:59.149584812 +0000 UTC m=+0.248776609 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 03:58:59 localhost podman[100749]: 2025-11-28 08:58:59.227580193 +0000 UTC m=+0.326775691 container 
exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044) Nov 28 03:58:59 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 03:58:59 localhost podman[100758]: 2025-11-28 08:58:59.203941455 +0000 UTC m=+0.299681896 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_migration_target) Nov 28 03:58:59 localhost podman[100746]: 2025-11-28 08:58:59.280632506 +0000 UTC m=+0.388598683 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat 
OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, distribution-scope=public) Nov 28 03:58:59 localhost podman[100748]: 2025-11-28 
08:58:59.281129971 +0000 UTC m=+0.380321768 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.openshift.expose-services=) Nov 28 03:58:59 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:58:59 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:58:59 localhost podman[100758]: 2025-11-28 08:58:59.60987956 +0000 UTC m=+0.705619971 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute) Nov 28 03:58:59 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 03:59:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 03:59:04 localhost systemd[1]: tmp-crun.SMsxxg.mount: Deactivated successfully. Nov 28 03:59:04 localhost podman[100857]: 2025-11-28 08:59:04.991354687 +0000 UTC m=+0.096465860 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, container_name=nova_compute) Nov 28 03:59:05 localhost podman[100857]: 2025-11-28 08:59:05.017931666 +0000 UTC m=+0.123042849 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute) Nov 28 03:59:05 localhost podman[100858]: 2025-11-28 08:59:05.029563904 +0000 UTC m=+0.129339883 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:59:05 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:59:05 localhost podman[100858]: 2025-11-28 08:59:05.043768181 +0000 UTC m=+0.143544160 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Nov 28 03:59:05 localhost podman[100858]: unhealthy Nov 28 03:59:05 localhost podman[100856]: 2025-11-28 08:59:04.954518613 +0000 UTC m=+0.063005899 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc.) Nov 28 03:59:05 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:59:05 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 03:59:05 localhost podman[100856]: 2025-11-28 08:59:05.087469286 +0000 UTC m=+0.195956572 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ovn_controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 03:59:05 localhost podman[100856]: unhealthy Nov 28 03:59:05 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:59:05 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 03:59:23 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 03:59:23 localhost recover_tripleo_nova_virtqemud[100924]: 62642 Nov 28 03:59:23 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 03:59:23 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 03:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:59:27 localhost systemd[1]: tmp-crun.gSOhJ6.mount: Deactivated successfully. 
Nov 28 03:59:27 localhost podman[100925]: 2025-11-28 08:59:27.983913988 +0000 UTC m=+0.094038776 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 28 03:59:28 localhost podman[100926]: 2025-11-28 08:59:28.034904768 +0000 UTC m=+0.139523587 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true) Nov 28 03:59:28 localhost podman[100926]: 2025-11-28 08:59:28.046563156 +0000 UTC m=+0.151181925 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, 
description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:59:28 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 03:59:28 localhost podman[100925]: 2025-11-28 08:59:28.193035385 +0000 UTC m=+0.303160163 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12) Nov 28 03:59:28 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 03:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 03:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 03:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 03:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 03:59:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 03:59:29 localhost podman[100974]: 2025-11-28 08:59:29.996924804 +0000 UTC m=+0.103018711 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 
17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.) Nov 28 03:59:30 localhost podman[100974]: 2025-11-28 08:59:30.005104526 +0000 UTC m=+0.111198453 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp17/openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:59:30 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 03:59:30 localhost systemd[1]: tmp-crun.ht9chI.mount: Deactivated successfully. 
Nov 28 03:59:30 localhost podman[100988]: 2025-11-28 08:59:30.062223065 +0000 UTC m=+0.154206598 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:59:30 localhost podman[100981]: 2025-11-28 08:59:30.113248415 +0000 UTC m=+0.206015123 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 03:59:30 localhost podman[100975]: 2025-11-28 08:59:30.153555526 +0000 UTC m=+0.256110435 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, release=1761123044) Nov 28 03:59:30 localhost podman[100975]: 2025-11-28 08:59:30.185606483 +0000 UTC m=+0.288161402 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z) Nov 28 03:59:30 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 03:59:30 localhost podman[100976]: 2025-11-28 08:59:30.202002787 +0000 UTC m=+0.298269952 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, version=17.1.12, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1) Nov 28 03:59:30 localhost podman[100976]: 2025-11-28 08:59:30.235596981 +0000 UTC m=+0.331864196 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 03:59:30 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 03:59:30 localhost podman[100981]: 2025-11-28 08:59:30.256556786 +0000 UTC m=+0.349323524 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z) Nov 28 03:59:30 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 03:59:30 localhost podman[100988]: 2025-11-28 08:59:30.382888955 +0000 UTC m=+0.474872508 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20251118.1, container_name=nova_migration_target, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 03:59:30 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 03:59:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 03:59:35 localhost podman[101168]: 2025-11-28 08:59:35.979016698 +0000 UTC m=+0.084554794 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ovn_controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4) Nov 28 03:59:36 localhost podman[101168]: 2025-11-28 08:59:36.02264077 +0000 UTC m=+0.128178876 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true) Nov 28 03:59:36 localhost podman[101168]: unhealthy Nov 28 03:59:36 localhost podman[101169]: 2025-11-28 08:59:36.030788031 +0000 UTC m=+0.134630825 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, architecture=x86_64, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 03:59:36 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:59:36 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: 
Failed with result 'exit-code'. Nov 28 03:59:36 localhost podman[101169]: 2025-11-28 08:59:36.08435865 +0000 UTC m=+0.188201414 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step5) Nov 28 03:59:36 localhost systemd[1]: tmp-crun.P1kJJm.mount: Deactivated successfully. Nov 28 03:59:36 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 03:59:36 localhost podman[101170]: 2025-11-28 08:59:36.089710555 +0000 UTC m=+0.190297359 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 03:59:36 localhost podman[101170]: 2025-11-28 08:59:36.169558843 +0000 UTC m=+0.270145617 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible) Nov 28 03:59:36 localhost podman[101170]: unhealthy Nov 28 03:59:36 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 03:59:36 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 03:59:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 03:59:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 03:59:58 localhost podman[101234]: 2025-11-28 08:59:58.980118781 +0000 UTC m=+0.086709960 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}) Nov 28 03:59:59 localhost systemd[1]: tmp-crun.GeFQyq.mount: Deactivated successfully. 
Nov 28 03:59:59 localhost podman[101235]: 2025-11-28 08:59:59.042805051 +0000 UTC m=+0.147108430 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64) Nov 28 03:59:59 localhost podman[101235]: 2025-11-28 08:59:59.054443479 +0000 UTC m=+0.158746828 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 
'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 03:59:59 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 03:59:59 localhost podman[101234]: 2025-11-28 08:59:59.173978488 +0000 UTC m=+0.280569687 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4) Nov 28 03:59:59 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:00:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 04:00:00 localhost podman[101286]: 2025-11-28 09:00:00.990176006 +0000 UTC m=+0.086445623 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team) Nov 28 04:00:01 localhost podman[101284]: 2025-11-28 09:00:01.03516324 +0000 UTC m=+0.137002938 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 04:00:01 localhost podman[101286]: 2025-11-28 09:00:01.050675387 +0000 UTC m=+0.146945014 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-compute, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, 
container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public) Nov 28 04:00:01 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 04:00:01 localhost podman[101284]: 2025-11-28 09:00:01.093533047 +0000 UTC m=+0.195372735 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc.) Nov 28 04:00:01 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. 
Nov 28 04:00:01 localhost podman[101285]: 2025-11-28 09:00:01.138389188 +0000 UTC m=+0.238591245 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, vendor=Red Hat, Inc., container_name=iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 04:00:01 localhost podman[101285]: 2025-11-28 09:00:01.152340328 +0000 UTC m=+0.252542405 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:00:01 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:00:01 localhost podman[101292]: 2025-11-28 09:00:01.247103705 +0000 UTC m=+0.339729950 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:00:01 localhost podman[101283]: 2025-11-28 09:00:01.278083468 +0000 UTC m=+0.383482505 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': 
'/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 04:00:01 localhost podman[101283]: 2025-11-28 09:00:01.285701202 +0000 UTC m=+0.391100209 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 
'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git) Nov 28 04:00:01 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:00:01 localhost podman[101292]: 2025-11-28 09:00:01.654484385 +0000 UTC m=+0.747110630 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:00:01 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:00:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:00:06 localhost podman[101403]: 2025-11-28 09:00:06.975212981 +0000 UTC m=+0.081303143 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:00:06 localhost systemd[1]: tmp-crun.tYRBLl.mount: Deactivated successfully. Nov 28 04:00:06 localhost podman[101403]: 2025-11-28 09:00:06.997368533 +0000 UTC m=+0.103458735 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:00:07 localhost podman[101403]: unhealthy Nov 28 04:00:07 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:00:07 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:00:07 localhost podman[101410]: 2025-11-28 09:00:06.995655581 +0000 UTC m=+0.087330830 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, config_id=tripleo_step4, version=17.1.12, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:00:07 localhost podman[101410]: 2025-11-28 09:00:07.082839575 +0000 UTC m=+0.174514814 container 
exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4) Nov 28 04:00:07 localhost podman[101410]: unhealthy Nov 28 04:00:07 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:00:07 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 04:00:07 localhost podman[101404]: 2025-11-28 09:00:07.139390316 +0000 UTC m=+0.238199894 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:00:07 localhost podman[101404]: 2025-11-28 09:00:07.169871524 +0000 UTC m=+0.268681052 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:00:07 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 04:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 04:00:29 localhost podman[101469]: 2025-11-28 09:00:29.978550064 +0000 UTC m=+0.084693359 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:00:30 localhost podman[101470]: 2025-11-28 09:00:30.031116781 +0000 UTC m=+0.131325212 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 
17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:00:30 localhost podman[101470]: 2025-11-28 09:00:30.069576075 +0000 UTC m=+0.169784526 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, description=Red Hat 
OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:00:30 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:00:30 localhost podman[101469]: 2025-11-28 09:00:30.172857955 +0000 UTC m=+0.279001300 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com) Nov 28 04:00:30 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:00:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:00:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:00:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:00:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:00:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 04:00:31 localhost podman[101519]: 2025-11-28 09:00:31.979238671 +0000 UTC m=+0.085909076 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, container_name=logrotate_crond, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team) Nov 28 04:00:32 localhost systemd[1]: tmp-crun.Ww5AVT.mount: Deactivated successfully. Nov 28 04:00:32 localhost podman[101521]: 2025-11-28 09:00:32.038220136 +0000 UTC m=+0.140710982 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 04:00:32 localhost podman[101521]: 2025-11-28 09:00:32.050463983 +0000 UTC m=+0.152954899 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, config_id=tripleo_step3, release=1761123044, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12) Nov 28 04:00:32 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:00:32 localhost podman[101520]: 2025-11-28 09:00:32.090234357 +0000 UTC m=+0.196787478 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi) Nov 28 04:00:32 localhost podman[101523]: 2025-11-28 09:00:32.148553662 +0000 UTC m=+0.246114487 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044) Nov 28 04:00:32 localhost podman[101519]: 2025-11-28 09:00:32.167101954 +0000 UTC m=+0.273772359 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, 
io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:00:32 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:00:32 localhost podman[101522]: 2025-11-28 09:00:31.965552149 +0000 UTC m=+0.070356736 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc.) Nov 28 04:00:32 localhost podman[101522]: 2025-11-28 09:00:32.251513412 +0000 UTC m=+0.356317959 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 04:00:32 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:00:32 localhost podman[101520]: 2025-11-28 09:00:32.26868196 +0000 UTC m=+0.375235071 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public) Nov 28 04:00:32 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 04:00:32 localhost podman[101523]: 2025-11-28 09:00:32.513283119 +0000 UTC m=+0.610843884 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d) Nov 28 04:00:32 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:00:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:00:37 localhost podman[101758]: 2025-11-28 09:00:37.369092494 +0000 UTC m=+0.088358901 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller) Nov 28 04:00:37 localhost podman[101760]: 2025-11-28 09:00:37.419686202 +0000 UTC m=+0.132551321 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public) Nov 28 04:00:37 localhost podman[101758]: 2025-11-28 09:00:37.440248124 +0000 UTC m=+0.159514561 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4) Nov 28 04:00:37 localhost podman[101758]: unhealthy Nov 28 04:00:37 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:00:37 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:00:37 localhost podman[101760]: 2025-11-28 09:00:37.491398129 +0000 UTC m=+0.204263168 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public) Nov 28 04:00:37 localhost podman[101760]: unhealthy Nov 28 04:00:37 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:00:37 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 04:00:37 localhost podman[101759]: 2025-11-28 09:00:37.581478212 +0000 UTC m=+0.295885669 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:00:37 localhost podman[101759]: 2025-11-28 09:00:37.613504438 +0000 UTC m=+0.327911885 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, 
com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:00:37 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 04:01:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:01:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 04:01:01 localhost podman[101826]: 2025-11-28 09:01:01.006862944 +0000 UTC m=+0.112344539 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, version=17.1.12) Nov 28 04:01:01 localhost podman[101826]: 2025-11-28 09:01:01.01452311 +0000 UTC m=+0.120004705 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 28 04:01:01 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 04:01:01 localhost podman[101825]: 2025-11-28 09:01:01.064793317 +0000 UTC m=+0.172775709 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 28 04:01:01 localhost podman[101825]: 2025-11-28 09:01:01.26140946 +0000 UTC m=+0.369391842 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 04:01:01 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 04:01:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:01:02 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:01:03 localhost recover_tripleo_nova_virtqemud[101931]: 62642 Nov 28 04:01:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:01:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 04:01:03 localhost podman[101899]: 2025-11-28 09:01:03.068342602 +0000 UTC m=+0.081920183 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:01:03 localhost podman[101907]: 2025-11-28 09:01:03.13876907 +0000 UTC m=+0.145960284 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:01:03 localhost podman[101901]: 2025-11-28 09:01:03.116237886 +0000 UTC m=+0.128208987 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff 
(image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 
17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4) Nov 28 04:01:03 localhost podman[101900]: 2025-11-28 09:01:03.184862258 +0000 UTC m=+0.197948934 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, 
name=rhosp17/openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container) Nov 28 04:01:03 localhost podman[101899]: 2025-11-28 09:01:03.203467961 +0000 UTC m=+0.217045592 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': 
True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi) Nov 28 04:01:03 localhost podman[101900]: 2025-11-28 09:01:03.219951369 +0000 UTC m=+0.233038095 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, 
vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1) Nov 28 04:01:03 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 04:01:03 localhost podman[101901]: 2025-11-28 09:01:03.251122668 +0000 UTC m=+0.263093769 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute) Nov 28 04:01:03 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:01:03 localhost podman[101898]: 2025-11-28 09:01:03.225955484 +0000 UTC m=+0.245147398 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 04:01:03 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 04:01:03 localhost podman[101898]: 2025-11-28 09:01:03.359811074 +0000 UTC m=+0.379002988 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible) Nov 28 04:01:03 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:01:03 localhost podman[101907]: 2025-11-28 09:01:03.50654032 +0000 UTC m=+0.513731514 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12) Nov 28 04:01:03 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:01:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:01:07 localhost systemd[1]: tmp-crun.BEqCoO.mount: Deactivated successfully. 
Nov 28 04:01:07 localhost podman[102015]: 2025-11-28 09:01:07.988650121 +0000 UTC m=+0.087300548 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 28 04:01:08 localhost podman[102015]: 2025-11-28 09:01:08.028360553 +0000 UTC m=+0.127010890 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4) Nov 28 04:01:08 localhost podman[102015]: unhealthy Nov 28 04:01:08 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:01:08 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:01:08 localhost podman[102013]: 2025-11-28 09:01:08.038377312 +0000 UTC m=+0.136847854 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1) Nov 28 04:01:08 localhost podman[102014]: 2025-11-28 09:01:08.103088904 +0000 UTC m=+0.200983988 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1) Nov 28 04:01:08 localhost podman[102013]: 2025-11-28 
09:01:08.124541983 +0000 UTC m=+0.223012515 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272) Nov 28 04:01:08 localhost podman[102013]: unhealthy Nov 28 04:01:08 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:01:08 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:01:08 localhost podman[102014]: 2025-11-28 09:01:08.181993583 +0000 UTC m=+0.279888727 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, container_name=nova_compute) Nov 28 04:01:08 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 04:01:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:01:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:01:31 localhost systemd[1]: tmp-crun.RI0WfF.mount: Deactivated successfully. Nov 28 04:01:31 localhost podman[102083]: 2025-11-28 09:01:31.99245234 +0000 UTC m=+0.091791695 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 04:01:32 localhost podman[102082]: 2025-11-28 09:01:32.037141826 +0000 UTC m=+0.140417282 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:01:32 localhost podman[102083]: 2025-11-28 09:01:32.052417157 +0000 UTC m=+0.151756432 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 
(image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, architecture=x86_64) Nov 28 04:01:32 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:01:32 localhost podman[102082]: 2025-11-28 09:01:32.267552909 +0000 UTC m=+0.370828335 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, architecture=x86_64, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:01:32 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 04:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:01:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:01:33 localhost podman[102146]: 2025-11-28 09:01:33.984452049 +0000 UTC m=+0.071331226 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:01:34 localhost podman[102139]: 2025-11-28 09:01:34.016315431 +0000 UTC m=+0.105492309 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z) Nov 28 04:01:34 localhost podman[102135]: 2025-11-28 09:01:34.046216361 +0000 UTC m=+0.137524825 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, version=17.1.12, build-date=2025-11-18T23:44:13Z, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 04:01:34 localhost podman[102135]: 2025-11-28 09:01:34.091892077 +0000 UTC m=+0.183200571 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, distribution-scope=public, release=1761123044, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible) Nov 28 04:01:34 localhost podman[102134]: 2025-11-28 09:01:34.099289574 +0000 UTC m=+0.198777649 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4) Nov 28 04:01:34 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:01:34 localhost podman[102134]: 2025-11-28 09:01:34.134631943 +0000 UTC m=+0.234120068 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, build-date=2025-11-19T00:12:45Z, vcs-type=git, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 04:01:34 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 04:01:34 localhost podman[102139]: 2025-11-28 09:01:34.170409244 +0000 UTC m=+0.259586122 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, tcib_managed=true) Nov 28 04:01:34 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:01:34 localhost podman[102133]: 2025-11-28 09:01:34.143239357 +0000 UTC m=+0.244717853 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-type=git, managed_by=tripleo_ansible, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:01:34 localhost podman[102133]: 2025-11-28 09:01:34.226394578 +0000 UTC m=+0.327873024 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:49:32Z) Nov 28 04:01:34 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:01:34 localhost podman[102146]: 2025-11-28 09:01:34.293650928 +0000 UTC m=+0.380530185 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, 
Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container) Nov 28 04:01:34 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:01:34 localhost systemd[1]: tmp-crun.1mRdfs.mount: Deactivated successfully. Nov 28 04:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:01:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:01:38 localhost podman[102325]: 2025-11-28 09:01:38.988369613 +0000 UTC m=+0.077234759 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 04:01:39 localhost podman[102325]: 2025-11-28 09:01:39.036541466 +0000 UTC m=+0.125406622 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, release=1761123044, 
batch=17.1_20251118.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, tcib_managed=true) Nov 28 04:01:39 localhost systemd[1]: tmp-crun.Ggho6R.mount: Deactivated successfully. Nov 28 04:01:39 localhost podman[102325]: unhealthy Nov 28 04:01:39 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:01:39 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:01:39 localhost podman[102324]: 2025-11-28 09:01:39.050781404 +0000 UTC m=+0.142811477 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1761123044) Nov 28 04:01:39 localhost podman[102323]: 2025-11-28 09:01:39.090994652 +0000 UTC m=+0.183406036 container health_status 
9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=) Nov 28 04:01:39 localhost podman[102324]: 2025-11-28 09:01:39.135304086 +0000 UTC m=+0.227334189 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:01:39 localhost podman[102323]: 2025-11-28 09:01:39.131114967 +0000 UTC m=+0.223526421 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, url=https://www.redhat.com) Nov 28 04:01:39 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 04:01:39 localhost podman[102323]: unhealthy Nov 28 04:01:39 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:01:39 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:02:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:02:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:02:02 localhost podman[102390]: 2025-11-28 09:02:02.987787278 +0000 UTC m=+0.088790735 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, release=1761123044, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4) Nov 28 04:02:03 localhost podman[102391]: 2025-11-28 09:02:03.044517474 +0000 UTC m=+0.140479415 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:51:28Z, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd) Nov 28 04:02:03 localhost podman[102391]: 2025-11-28 09:02:03.053333195 +0000 UTC m=+0.149295106 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, version=17.1.12, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public) Nov 28 04:02:03 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 04:02:03 localhost podman[102390]: 2025-11-28 09:02:03.175996941 +0000 UTC m=+0.277000378 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, distribution-scope=public) Nov 28 04:02:03 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:02:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:02:03 localhost recover_tripleo_nova_virtqemud[102438]: 62642 Nov 28 04:02:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:02:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 04:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:02:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 04:02:04 localhost podman[102441]: 2025-11-28 09:02:04.996588124 +0000 UTC m=+0.092046475 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:02:05 localhost podman[102441]: 2025-11-28 09:02:05.006685774 +0000 UTC m=+0.102144125 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc.) Nov 28 04:02:05 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 04:02:05 localhost systemd[1]: tmp-crun.ePap8R.mount: Deactivated successfully. 
Nov 28 04:02:05 localhost podman[102451]: 2025-11-28 09:02:05.089437742 +0000 UTC m=+0.181012093 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, container_name=nova_migration_target, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z) Nov 28 04:02:05 localhost podman[102442]: 2025-11-28 09:02:05.065692471 +0000 UTC m=+0.158169440 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, 
maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 04:02:05 localhost podman[102439]: 2025-11-28 09:02:05.137019177 +0000 UTC m=+0.241298369 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:02:05 localhost 
podman[102439]: 2025-11-28 09:02:05.145481217 +0000 UTC m=+0.249760429 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=logrotate_crond) Nov 28 04:02:05 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 04:02:05 localhost podman[102440]: 2025-11-28 09:02:05.192536966 +0000 UTC m=+0.294379393 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1761123044) Nov 28 04:02:05 localhost podman[102442]: 2025-11-28 09:02:05.202052808 +0000 UTC m=+0.294529737 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vendor=Red 
Hat, Inc., release=1761123044, url=https://www.redhat.com) Nov 28 04:02:05 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 04:02:05 localhost podman[102440]: 2025-11-28 09:02:05.245324721 +0000 UTC m=+0.347167198 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z) Nov 28 04:02:05 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Deactivated successfully. Nov 28 04:02:05 localhost podman[102451]: 2025-11-28 09:02:05.450684232 +0000 UTC m=+0.542258603 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=) Nov 28 04:02:05 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:02:09 localhost sshd[102552]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 04:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:02:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:02:09 localhost podman[102554]: 2025-11-28 09:02:09.978442448 +0000 UTC m=+0.085084890 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller) Nov 28 04:02:09 localhost podman[102554]: 2025-11-28 09:02:09.997439933 +0000 UTC m=+0.104082365 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:02:10 localhost podman[102554]: unhealthy Nov 28 04:02:10 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:02:10 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:02:10 localhost podman[102555]: 2025-11-28 09:02:10.049479694 +0000 UTC m=+0.150576685 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, container_name=nova_compute, version=17.1.12, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 04:02:10 localhost systemd[1]: tmp-crun.JFBaaU.mount: Deactivated successfully. 
Nov 28 04:02:10 localhost podman[102556]: 2025-11-28 09:02:10.093507829 +0000 UTC m=+0.193359553 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git) Nov 28 04:02:10 localhost podman[102556]: 2025-11-28 09:02:10.10749484 +0000 UTC m=+0.207346614 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:02:10 localhost podman[102556]: unhealthy Nov 28 04:02:10 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:02:10 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:02:10 localhost podman[102555]: 2025-11-28 09:02:10.16076832 +0000 UTC m=+0.261865251 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.) Nov 28 04:02:10 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. 
Nov 28 04:02:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:02:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:02:33 localhost podman[102617]: 2025-11-28 09:02:33.98536317 +0000 UTC m=+0.092377525 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd) Nov 28 04:02:34 localhost podman[102618]: 2025-11-28 09:02:34.02923429 +0000 UTC m=+0.134937135 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, 
tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, release=1761123044, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:02:34 localhost podman[102618]: 2025-11-28 09:02:34.067511848 +0000 UTC m=+0.173214673 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, 
release=1761123044, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=collectd, config_id=tripleo_step3, 
konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public) Nov 28 04:02:34 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:02:34 localhost podman[102617]: 2025-11-28 09:02:34.170500079 +0000 UTC m=+0.277514434 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Nov 28 04:02:34 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 04:02:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:02:35 localhost systemd[1]: tmp-crun.dh57NW.mount: Deactivated successfully. Nov 28 04:02:35 localhost podman[102666]: 2025-11-28 09:02:35.996041363 +0000 UTC m=+0.099930396 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com) Nov 28 04:02:36 localhost podman[102666]: 2025-11-28 09:02:36.032351701 +0000 UTC m=+0.136240684 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z) Nov 28 04:02:36 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:02:36 localhost podman[102667]: 2025-11-28 09:02:36.035759367 +0000 UTC m=+0.136897715 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public) Nov 28 04:02:36 localhost podman[102669]: 2025-11-28 09:02:36.091199753 +0000 UTC m=+0.186913534 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute) Nov 28 04:02:36 localhost podman[102668]: 2025-11-28 09:02:36.149232189 +0000 UTC m=+0.248380957 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, release=1761123044, container_name=iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, description=Red Hat 
OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 28 04:02:36 localhost podman[102668]: 2025-11-28 09:02:36.156776742 +0000 UTC m=+0.255925510 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}) Nov 28 04:02:36 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 04:02:36 localhost podman[102669]: 2025-11-28 09:02:36.16840496 +0000 UTC m=+0.264118731 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4) Nov 28 04:02:36 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:02:36 localhost podman[102674]: 2025-11-28 09:02:36.247558336 +0000 UTC m=+0.340216483 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:02:36 localhost podman[102667]: 2025-11-28 09:02:36.269375228 +0000 UTC m=+0.370513566 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi) Nov 28 04:02:36 localhost podman[102667]: unhealthy Nov 28 04:02:36 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:02:36 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:02:36 localhost podman[102674]: 2025-11-28 09:02:36.604405671 +0000 UTC m=+0.697063868 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4) Nov 28 04:02:36 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:02:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:02:40 localhost podman[102885]: 2025-11-28 09:02:40.064292985 +0000 UTC m=+0.100628148 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Nov 28 04:02:40 localhost podman[102885]: 2025-11-28 09:02:40.162461767 +0000 UTC m=+0.198796970 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, GIT_BRANCH=main) Nov 28 04:02:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:02:40 localhost systemd[1]: tmp-crun.xJSeJv.mount: Deactivated successfully. Nov 28 04:02:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 04:02:40 localhost podman[102903]: 2025-11-28 09:02:40.163751446 +0000 UTC m=+0.098147962 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:02:40 localhost podman[102903]: 2025-11-28 09:02:40.255353207 +0000 UTC m=+0.189749673 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_controller, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container) Nov 28 04:02:40 localhost podman[102903]: unhealthy Nov 28 04:02:40 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:02:40 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:02:40 localhost podman[102923]: 2025-11-28 09:02:40.259512624 +0000 UTC m=+0.085343638 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 04:02:40 localhost podman[102923]: 2025-11-28 09:02:40.342729346 +0000 UTC m=+0.168560360 container exec_died 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:02:40 localhost podman[102923]: unhealthy Nov 28 04:02:40 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:02:40 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 04:02:40 localhost podman[102949]: 2025-11-28 09:02:40.393080726 +0000 UTC m=+0.179253269 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible) Nov 28 04:02:40 localhost podman[102949]: 2025-11-28 09:02:40.500912196 +0000 UTC m=+0.287084769 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, url=https://www.redhat.com, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:02:40 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Deactivated successfully. Nov 28 04:03:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:03:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 04:03:04 localhost podman[103093]: 2025-11-28 09:03:04.99586564 +0000 UTC m=+0.091142956 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1761123044, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.) Nov 28 04:03:05 localhost systemd[1]: tmp-crun.yA6SyH.mount: Deactivated successfully. Nov 28 04:03:05 localhost podman[103094]: 2025-11-28 09:03:05.0569192 +0000 UTC m=+0.152324530 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, 
config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12) Nov 28 04:03:05 localhost podman[103094]: 2025-11-28 09:03:05.072366596 +0000 UTC m=+0.167771906 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, 
tcib_managed=true, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:03:05 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:03:05 localhost podman[103093]: 2025-11-28 09:03:05.210743115 +0000 UTC m=+0.306020461 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=) Nov 28 04:03:05 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:03:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 04:03:06 localhost podman[103142]: 2025-11-28 09:03:06.975824909 +0000 UTC m=+0.079220969 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:03:06 localhost podman[103142]: 2025-11-28 09:03:06.987479478 +0000 UTC m=+0.090875578 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 04:03:07 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 04:03:07 localhost systemd[1]: tmp-crun.eGsJU0.mount: Deactivated successfully. 
Nov 28 04:03:07 localhost podman[103157]: 2025-11-28 09:03:07.045214216 +0000 UTC m=+0.135241494 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:03:07 localhost podman[103143]: 2025-11-28 09:03:07.086574229 +0000 UTC m=+0.189569607 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:03:07 localhost podman[103145]: 2025-11-28 09:03:07.159619178 +0000 UTC m=+0.253283109 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, 
Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:03:07 localhost podman[103143]: 2025-11-28 09:03:07.174550817 +0000 UTC m=+0.277546225 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team) Nov 28 04:03:07 localhost podman[103143]: unhealthy Nov 28 04:03:07 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:07 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:03:07 localhost podman[103145]: 2025-11-28 09:03:07.199502955 +0000 UTC m=+0.293166926 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute) Nov 28 04:03:07 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. Nov 28 04:03:07 localhost podman[103144]: 2025-11-28 09:03:07.248459852 +0000 UTC m=+0.346465406 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 04:03:07 localhost podman[103144]: 2025-11-28 09:03:07.262421491 +0000 UTC m=+0.360427015 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, summary=Red Hat 
OpenStack Platform 17.1 iscsid) Nov 28 04:03:07 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 04:03:07 localhost podman[103157]: 2025-11-28 09:03:07.406380113 +0000 UTC m=+0.496407451 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, 
architecture=x86_64, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute) Nov 28 04:03:07 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:03:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:03:10 localhost podman[103252]: 2025-11-28 09:03:10.988510251 +0000 UTC m=+0.098770442 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public) Nov 28 04:03:11 localhost podman[103252]: 2025-11-28 09:03:11.023146106 +0000 UTC m=+0.133406357 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:03:11 localhost podman[103252]: unhealthy Nov 28 04:03:11 localhost podman[103254]: 2025-11-28 09:03:11.036684173 +0000 UTC m=+0.138495084 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=) Nov 28 04:03:11 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:11 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:03:11 localhost podman[103254]: 2025-11-28 09:03:11.050594592 +0000 UTC m=+0.152405503 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 04:03:11 localhost podman[103254]: unhealthy Nov 28 04:03:11 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:11 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 04:03:11 localhost podman[103253]: 2025-11-28 09:03:11.106879194 +0000 UTC m=+0.212922315 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 04:03:11 localhost podman[103253]: 2025-11-28 09:03:11.123900928 +0000 UTC m=+0.229944069 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 
nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64) Nov 28 04:03:11 localhost podman[103253]: unhealthy Nov 28 04:03:11 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:11 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:03:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29737 DF PROTO=TCP SPT=37178 DPT=9100 SEQ=4062564550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB739710000000001030307) Nov 28 04:03:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59278 DF PROTO=TCP SPT=49178 DPT=9101 SEQ=3146939845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB73C400000000001030307) Nov 28 04:03:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29738 DF PROTO=TCP SPT=37178 DPT=9100 SEQ=4062564550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB73D7A0000000001030307) Nov 28 04:03:15 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c 
MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59279 DF PROTO=TCP SPT=49178 DPT=9101 SEQ=3146939845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7403B0000000001030307) Nov 28 04:03:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29739 DF PROTO=TCP SPT=37178 DPT=9100 SEQ=4062564550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7457A0000000001030307) Nov 28 04:03:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59280 DF PROTO=TCP SPT=49178 DPT=9101 SEQ=3146939845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7483A0000000001030307) Nov 28 04:03:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29740 DF PROTO=TCP SPT=37178 DPT=9100 SEQ=4062564550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7553A0000000001030307) Nov 28 04:03:21 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59281 DF PROTO=TCP SPT=49178 DPT=9101 SEQ=3146939845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB757FA0000000001030307) Nov 28 04:03:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48822 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB770720000000001030307) Nov 28 04:03:28 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48823 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7747B0000000001030307) Nov 28 04:03:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29741 DF PROTO=TCP SPT=37178 DPT=9100 SEQ=4062564550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB774FB0000000001030307) Nov 28 04:03:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55894 DF PROTO=TCP SPT=33272 DPT=9882 SEQ=482753259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7759C0000000001030307) Nov 28 04:03:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59282 DF PROTO=TCP SPT=49178 DPT=9101 SEQ=3146939845 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB778FA0000000001030307) Nov 28 04:03:29 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55895 DF PROTO=TCP SPT=33272 DPT=9882 SEQ=482753259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB779BA0000000001030307) Nov 28 04:03:30 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48824 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB77C7A0000000001030307) Nov 28 04:03:32 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55896 DF PROTO=TCP SPT=33272 DPT=9882 SEQ=482753259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB781BB0000000001030307) Nov 28 04:03:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17877 DF PROTO=TCP SPT=40392 DPT=9102 SEQ=1719917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB781E80000000001030307) Nov 28 04:03:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17878 DF PROTO=TCP SPT=40392 DPT=9102 SEQ=1719917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB785FA0000000001030307) Nov 28 04:03:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48825 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB78C3A0000000001030307) Nov 28 04:03:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17879 DF PROTO=TCP SPT=40392 DPT=9102 SEQ=1719917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB78DFA0000000001030307) Nov 28 04:03:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:03:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 04:03:35 localhost podman[103314]: 2025-11-28 09:03:35.973651234 +0000 UTC m=+0.083629844 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044) Nov 28 04:03:36 localhost systemd[1]: tmp-crun.H4wZZE.mount: Deactivated successfully. Nov 28 04:03:36 localhost podman[103315]: 2025-11-28 09:03:36.032422564 +0000 UTC m=+0.140376933 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com) Nov 28 04:03:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55897 DF PROTO=TCP SPT=33272 DPT=9882 SEQ=482753259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7917A0000000001030307) Nov 28 04:03:36 localhost podman[103315]: 2025-11-28 09:03:36.042164974 +0000 UTC m=+0.150119303 
container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:03:36 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:03:36 localhost podman[103314]: 2025-11-28 09:03:36.170612428 +0000 UTC m=+0.280591068 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, container_name=metrics_qdr, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:03:36 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. 
Nov 28 04:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:03:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:03:37 localhost podman[103367]: 2025-11-28 09:03:37.993748578 +0000 UTC m=+0.091138146 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git) Nov 28 04:03:38 localhost podman[103367]: 2025-11-28 09:03:38.0214056 +0000 UTC m=+0.118795198 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, 
url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z) Nov 28 04:03:38 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:03:38 localhost podman[103370]: 2025-11-28 09:03:38.091458036 +0000 UTC m=+0.184512611 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044) Nov 28 04:03:38 localhost podman[103364]: 2025-11-28 09:03:38.102167916 +0000 UTC m=+0.207606152 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_id=tripleo_step4, vcs-type=git) Nov 28 04:03:38 localhost podman[103364]: 2025-11-28 09:03:38.110783491 +0000 UTC m=+0.216221677 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, managed_by=tripleo_ansible) Nov 28 04:03:38 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:03:38 localhost podman[103365]: 2025-11-28 09:03:38.202498145 +0000 UTC m=+0.304436163 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, release=1761123044) Nov 28 04:03:38 localhost podman[103365]: 2025-11-28 09:03:38.257297942 +0000 UTC m=+0.359236000 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4) Nov 28 04:03:38 localhost podman[103365]: unhealthy Nov 28 04:03:38 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:38 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:03:38 localhost podman[103366]: 2025-11-28 09:03:38.258046065 +0000 UTC m=+0.356482515 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat 
OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, version=17.1.12, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:03:38 localhost podman[103366]: 2025-11-28 09:03:38.337999395 +0000 UTC m=+0.436435835 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1) Nov 28 04:03:38 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:03:38 localhost podman[103370]: 2025-11-28 09:03:38.461674832 +0000 UTC m=+0.554729407 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, tcib_managed=true, 
version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:03:38 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:03:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17880 DF PROTO=TCP SPT=40392 DPT=9102 SEQ=1719917992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB79DBB0000000001030307) Nov 28 04:03:39 localhost sshd[103472]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:03:40 localhost systemd-logind[763]: New session 35 of user zuul. Nov 28 04:03:40 localhost systemd[1]: Started Session 35 of User zuul. Nov 28 04:03:40 localhost python3.9[103567]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 04:03:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:03:41 localhost podman[103662]: 2025-11-28 09:03:41.618058084 +0000 UTC m=+0.083223712 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git) Nov 28 04:03:41 localhost podman[103662]: 2025-11-28 09:03:41.63644916 +0000 UTC m=+0.101614788 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=) Nov 28 04:03:41 localhost podman[103663]: 2025-11-28 09:03:41.666681801 +0000 UTC m=+0.128120965 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container) Nov 28 04:03:41 localhost podman[103662]: unhealthy Nov 28 04:03:41 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process 
exited, code=exited, status=1/FAILURE Nov 28 04:03:41 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:03:41 localhost podman[103663]: 2025-11-28 09:03:41.714453742 +0000 UTC m=+0.175892906 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:03:41 localhost python3.9[103661]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:03:41 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process 
exited, code=exited, status=1/FAILURE Nov 28 04:03:41 localhost podman[103663]: unhealthy Nov 28 04:03:41 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:03:41 localhost podman[103664]: 2025-11-28 09:03:41.779862315 +0000 UTC m=+0.239822164 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 
1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:03:41 localhost podman[103664]: 2025-11-28 09:03:41.798422937 +0000 UTC m=+0.258382706 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 04:03:41 localhost podman[103664]: unhealthy Nov 28 04:03:41 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:03:41 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:03:42 localhost python3.9[103846]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:03:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48826 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7ACFA0000000001030307) Nov 28 04:03:43 localhost python3.9[103972]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:03:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=106 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=1663755763 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7AEA20000000001030307) Nov 28 04:03:43 localhost python3.9[104080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:03:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55898 DF PROTO=TCP SPT=33272 DPT=9882 SEQ=482753259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7B0FA0000000001030307) Nov 28 04:03:44 localhost python3.9[104171]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Nov 28 04:03:46 localhost python3.9[104261]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:03:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=108 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=1663755763 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7BABB0000000001030307) Nov 28 04:03:46 localhost python3.9[104353]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Nov 28 04:03:48 localhost python3.9[104443]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:03:48 localhost python3.9[104491]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] 
enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:03:49 localhost systemd[1]: session-35.scope: Deactivated successfully. Nov 28 04:03:49 localhost systemd[1]: session-35.scope: Consumed 4.989s CPU time. Nov 28 04:03:49 localhost systemd-logind[763]: Session 35 logged out. Waiting for processes to exit. Nov 28 04:03:49 localhost systemd-logind[763]: Removed session 35. Nov 28 04:03:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=109 DF PROTO=TCP SPT=37044 DPT=9100 SEQ=1663755763 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7CA7B0000000001030307) Nov 28 04:03:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34651 DF PROTO=TCP SPT=35246 DPT=9105 SEQ=2060818330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7E5A30000000001030307) Nov 28 04:03:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34652 DF PROTO=TCP SPT=35246 DPT=9105 SEQ=2060818330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7E9BA0000000001030307) Nov 28 04:03:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17769 DF PROTO=TCP SPT=46804 DPT=9882 SEQ=2702682633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7EACC0000000001030307) Nov 28 04:04:01 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17771 DF PROTO=TCP SPT=46804 DPT=9882 SEQ=2702682633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB7F6BA0000000001030307) Nov 28 04:04:03 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:04:03 localhost recover_tripleo_nova_virtqemud[104508]: 62642 Nov 28 04:04:03 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:04:03 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 04:04:04 localhost sshd[104509]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:04:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34654 DF PROTO=TCP SPT=35246 DPT=9105 SEQ=2060818330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8017A0000000001030307) Nov 28 04:04:04 localhost systemd-logind[763]: New session 36 of user zuul. Nov 28 04:04:04 localhost systemd[1]: Started Session 36 of User zuul. Nov 28 04:04:05 localhost python3.9[104604]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:04:05 localhost systemd[1]: Reloading. Nov 28 04:04:06 localhost systemd-sysv-generator[104634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:04:06 localhost systemd-rc-local-generator[104629]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:04:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:04:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:04:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:04:06 localhost podman[104642]: 2025-11-28 09:04:06.372031876 +0000 UTC m=+0.089387333 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:04:06 localhost podman[104642]: 2025-11-28 09:04:06.41345616 +0000 UTC m=+0.130811657 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, 
architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, 
vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:04:06 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:04:06 localhost podman[104641]: 2025-11-28 09:04:06.434437537 +0000 UTC m=+0.152273809 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1761123044, tcib_managed=true) Nov 28 04:04:06 localhost podman[104641]: 2025-11-28 09:04:06.67040034 +0000 UTC m=+0.388236612 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:04:06 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:04:07 localhost python3.9[104778]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:04:07 localhost network[104795]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:04:07 localhost network[104796]: 'network-scripts' will be removed from distribution in near future. 
Nov 28 04:04:07 localhost network[104797]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:04:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:04:09 localhost podman[104814]: 2025-11-28 09:04:09.01039255 +0000 UTC m=+0.094847340 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_migration_target, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 04:04:09 localhost podman[104810]: 2025-11-28 09:04:09.063455594 +0000 UTC m=+0.156824288 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, 
io.buildah.version=1.41.4) Nov 28 04:04:09 localhost podman[104810]: 2025-11-28 09:04:09.097807262 +0000 UTC m=+0.191175956 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat 
OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vendor=Red Hat, Inc., container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team) Nov 28 04:04:09 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 04:04:09 localhost podman[104812]: 2025-11-28 09:04:09.178253348 +0000 UTC m=+0.270225519 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-type=git, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid) Nov 28 04:04:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60307 DF PROTO=TCP SPT=51290 DPT=9102 SEQ=1197907661 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB812FA0000000001030307) Nov 28 04:04:09 localhost podman[104812]: 2025-11-28 09:04:09.217616199 +0000 UTC m=+0.309588350 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, 
io.openshift.expose-services=, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, 
managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid) Nov 28 04:04:09 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. Nov 28 04:04:09 localhost podman[104813]: 2025-11-28 09:04:09.237040177 +0000 UTC m=+0.320215307 container health_status d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:04:09 localhost podman[104813]: 2025-11-28 09:04:09.270773516 +0000 UTC m=+0.353948636 container exec_died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, 
konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 04:04:09 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Deactivated successfully. 
Nov 28 04:04:09 localhost podman[104811]: 2025-11-28 09:04:09.276364918 +0000 UTC m=+0.369909908 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:04:09 localhost podman[104811]: 2025-11-28 09:04:09.359712594 +0000 UTC m=+0.453257584 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com) Nov 28 04:04:09 localhost podman[104811]: unhealthy Nov 28 04:04:09 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:09 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:04:09 localhost podman[104814]: 2025-11-28 09:04:09.381356 +0000 UTC m=+0.465810780 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12) Nov 28 04:04:09 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:04:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:04:11 localhost python3.9[105108]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:04:11 localhost network[105125]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:04:11 localhost network[105126]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:04:11 localhost network[105127]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:04:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:04:12 localhost podman[105133]: 2025-11-28 09:04:12.000616858 +0000 UTC m=+0.098023429 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}) Nov 28 04:04:12 localhost podman[105134]: 2025-11-28 09:04:12.108852519 +0000 UTC m=+0.202729971 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 
'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Nov 28 04:04:12 localhost podman[105132]: 2025-11-28 09:04:12.070846319 +0000 UTC m=+0.168740535 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc.) Nov 28 04:04:12 localhost podman[105132]: 2025-11-28 09:04:12.15112249 +0000 UTC m=+0.249016736 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, distribution-scope=public) Nov 28 04:04:12 localhost podman[105132]: unhealthy Nov 28 04:04:12 localhost podman[105134]: 2025-11-28 09:04:12.174243842 +0000 UTC m=+0.268121234 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, io.openshift.expose-services=) Nov 28 04:04:12 localhost podman[105134]: unhealthy Nov 28 04:04:12 localhost podman[105133]: 2025-11-28 09:04:12.226010565 +0000 UTC m=+0.323417166 container exec_died 
ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
version=17.1.12, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container) Nov 28 04:04:12 localhost podman[105133]: unhealthy Nov 28 04:04:12 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:12 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:04:12 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:12 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. 
Nov 28 04:04:12 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:12 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:04:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34655 DF PROTO=TCP SPT=35246 DPT=9105 SEQ=2060818330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB820FA0000000001030307) Nov 28 04:04:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:04:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31820 DF PROTO=TCP SPT=38202 DPT=9101 SEQ=3180037704 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB826A00000000001030307) Nov 28 04:04:15 localhost python3.9[105391]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:04:15 localhost systemd[1]: Reloading. Nov 28 04:04:15 localhost systemd-sysv-generator[105423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:04:15 localhost systemd-rc-local-generator[105417]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:04:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:04:15 localhost systemd[1]: Stopping ceilometer_agent_compute container... Nov 28 04:04:16 localhost systemd[1]: tmp-crun.xYoGUS.mount: Deactivated successfully. Nov 28 04:04:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8058 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=491117686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB82FBA0000000001030307) Nov 28 04:04:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8059 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=491117686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB83F7A0000000001030307) Nov 28 04:04:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9562 DF PROTO=TCP SPT=60390 DPT=9105 SEQ=1153921280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB85AD30000000001030307) Nov 28 04:04:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8060 DF PROTO=TCP SPT=60568 DPT=9100 SEQ=491117686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB85EFA0000000001030307) Nov 28 04:04:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9563 DF PROTO=TCP SPT=60390 DPT=9105 SEQ=1153921280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5AB85EFA0000000001030307) Nov 28 04:04:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48828 DF PROTO=TCP SPT=55802 DPT=9105 SEQ=3116172900 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB86AFB0000000001030307) Nov 28 04:04:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9565 DF PROTO=TCP SPT=60390 DPT=9105 SEQ=1153921280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB876BA0000000001030307) Nov 28 04:04:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:04:36 localhost podman[105447]: 2025-11-28 09:04:36.745038496 +0000 UTC m=+0.091731276 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, release=1761123044, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}) Nov 28 04:04:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. 
Nov 28 04:04:36 localhost podman[105447]: 2025-11-28 09:04:36.784691677 +0000 UTC m=+0.131384426 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12) Nov 28 04:04:36 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:04:36 localhost podman[105465]: 2025-11-28 09:04:36.839553555 +0000 UTC m=+0.082283943 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 
17.1_20251118.1, architecture=x86_64, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible) Nov 28 04:04:37 localhost podman[105465]: 2025-11-28 09:04:37.096822685 +0000 UTC m=+0.339553113 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, config_id=tripleo_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc.) Nov 28 04:04:37 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:04:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30847 DF PROTO=TCP SPT=56696 DPT=9102 SEQ=1278786119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB887FB0000000001030307) Nov 28 04:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. Nov 28 04:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:04:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. 
Nov 28 04:04:39 localhost podman[105503]: 2025-11-28 09:04:39.495245225 +0000 UTC m=+0.091809057 container health_status 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=) Nov 28 04:04:39 localhost podman[105496]: 2025-11-28 09:04:39.542589962 +0000 UTC m=+0.145977404 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_id=tripleo_step3, tcib_managed=true, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public) Nov 28 04:04:39 localhost podman[105496]: 2025-11-28 09:04:39.551973722 +0000 UTC m=+0.155361204 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_id=tripleo_step3, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com) Nov 28 04:04:39 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:04:39 localhost podman[105503]: 2025-11-28 09:04:39.575216607 +0000 UTC m=+0.171780439 container exec_died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, vendor=Red Hat, Inc.) Nov 28 04:04:39 localhost podman[105503]: unhealthy Nov 28 04:04:39 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:39 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. Nov 28 04:04:39 localhost podman[105495]: 2025-11-28 09:04:39.591494868 +0000 UTC m=+0.198701958 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:04:39 localhost podman[105495]: 2025-11-28 09:04:39.623617846 +0000 UTC m=+0.230824956 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron) Nov 28 04:04:39 localhost podman[105543]: 2025-11-28 09:04:39.637506304 +0000 UTC m=+0.133711307 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, version=17.1.12, vcs-type=git) Nov 28 04:04:39 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. Nov 28 04:04:39 localhost podman[105497]: Error: container d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff is not running Nov 28 04:04:39 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Main process exited, code=exited, status=125/n/a Nov 28 04:04:39 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Failed with result 'exit-code'. 
Nov 28 04:04:40 localhost podman[105543]: 2025-11-28 09:04:40.013593091 +0000 UTC m=+0.509798094 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target) Nov 28 04:04:40 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:04:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:04:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:04:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:04:42 localhost podman[105596]: 2025-11-28 09:04:42.709694035 +0000 UTC m=+0.066123647 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, container_name=nova_compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible) Nov 28 04:04:42 localhost podman[105596]: 2025-11-28 09:04:42.730489734 +0000 UTC m=+0.086919386 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12) Nov 28 04:04:42 localhost podman[105596]: unhealthy Nov 28 04:04:42 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:42 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:04:42 localhost podman[105597]: 2025-11-28 09:04:42.787584902 +0000 UTC m=+0.137608317 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, 
org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1761123044, io.buildah.version=1.41.4) Nov 28 04:04:42 localhost podman[105595]: 
2025-11-28 09:04:42.827166461 +0000 UTC m=+0.185659317 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:04:42 localhost podman[105595]: 2025-11-28 09:04:42.844416151 +0000 UTC m=+0.202909047 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, 
build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-type=git) Nov 28 04:04:42 localhost podman[105595]: unhealthy Nov 28 04:04:42 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:42 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:04:42 localhost podman[105597]: 2025-11-28 09:04:42.870613218 +0000 UTC m=+0.220636593 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, 
Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, distribution-scope=public) Nov 28 04:04:42 localhost podman[105597]: unhealthy Nov 28 04:04:42 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:04:42 localhost systemd[1]: 
e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:04:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9566 DF PROTO=TCP SPT=60390 DPT=9105 SEQ=1153921280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB896FA0000000001030307) Nov 28 04:04:43 localhost systemd[1]: tmp-crun.pX1ygc.mount: Deactivated successfully. Nov 28 04:04:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44103 DF PROTO=TCP SPT=48844 DPT=9101 SEQ=2307672429 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB89BD00000000001030307) Nov 28 04:04:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41000 DF PROTO=TCP SPT=33542 DPT=9100 SEQ=1773268596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8A4FA0000000001030307) Nov 28 04:04:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41001 DF PROTO=TCP SPT=33542 DPT=9100 SEQ=1773268596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8B4BA0000000001030307) Nov 28 04:04:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59803 DF PROTO=TCP SPT=54994 DPT=9105 SEQ=3520360375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8D0030000000001030307) Nov 28 04:04:58 localhost podman[105432]: time="2025-11-28T09:04:58Z" level=warning msg="StopSignal SIGTERM failed to stop 
container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL" Nov 28 04:04:58 localhost systemd[1]: libpod-d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.scope: Deactivated successfully. Nov 28 04:04:58 localhost systemd[1]: libpod-d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.scope: Consumed 5.798s CPU time. Nov 28 04:04:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:33:a3:19 MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45014 SEQ=27833604 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 28 04:04:58 localhost podman[105432]: 2025-11-28 09:04:58.099346667 +0000 UTC m=+42.111713380 container died d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, release=1761123044) Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.timer: Deactivated successfully. Nov 28 04:04:58 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff. 
Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Failed to open /run/systemd/transient/d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: No such file or directory Nov 28 04:04:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff-userdata-shm.mount: Deactivated successfully. Nov 28 04:04:58 localhost systemd[1]: var-lib-containers-storage-overlay-d613e9ce43651b4a22ba11f5bcafcb4dcc9b302834037925dc9c415fac8e707f-merged.mount: Deactivated successfully. Nov 28 04:04:58 localhost podman[105432]: 2025-11-28 09:04:58.157623001 +0000 UTC m=+42.169989664 container cleanup d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team) Nov 28 04:04:58 localhost podman[105432]: ceilometer_agent_compute Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.timer: Failed to open /run/systemd/transient/d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.timer: No such file or directory Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Failed to open /run/systemd/transient/d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: No such file or directory Nov 28 04:04:58 localhost podman[105732]: 2025-11-28 09:04:58.187828701 +0000 UTC m=+0.080535850 
container cleanup d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, config_id=tripleo_step4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, 
container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=) Nov 28 04:04:58 localhost systemd[1]: libpod-conmon-d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.scope: Deactivated successfully. Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.timer: Failed to open /run/systemd/transient/d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.timer: No such file or directory Nov 28 04:04:58 localhost systemd[1]: d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: Failed to open /run/systemd/transient/d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff.service: No such file or directory Nov 28 04:04:58 localhost podman[105748]: 2025-11-28 09:04:58.317764571 +0000 UTC m=+0.071013477 container cleanup d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 
17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team) Nov 28 04:04:58 
localhost podman[105748]: ceilometer_agent_compute Nov 28 04:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully. Nov 28 04:04:58 localhost systemd[1]: Stopped ceilometer_agent_compute container. Nov 28 04:04:58 localhost systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.121s CPU time, no IO. Nov 28 04:04:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:33:a3:19 MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45014 SEQ=27833604 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 28 04:04:59 localhost python3.9[105852]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:04:59 localhost systemd[1]: Reloading. Nov 28 04:04:59 localhost systemd-rc-local-generator[105878]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:04:59 localhost systemd-sysv-generator[105883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:04:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:04:59 localhost systemd[1]: Stopping ceilometer_agent_ipmi container... 
Nov 28 04:05:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6015 DF PROTO=TCP SPT=55908 DPT=9882 SEQ=1613149440 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8E13A0000000001030307) Nov 28 04:05:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59806 DF PROTO=TCP SPT=54994 DPT=9105 SEQ=3520360375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8EBBB0000000001030307) Nov 28 04:05:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:05:06 localhost podman[105906]: 2025-11-28 09:05:06.995819413 +0000 UTC m=+0.093927301 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd) Nov 28 04:05:07 localhost podman[105906]: 2025-11-28 09:05:07.007183843 +0000 UTC m=+0.105291711 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible) Nov 28 04:05:07 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. Nov 28 04:05:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:05:07 localhost podman[105926]: 2025-11-28 09:05:07.997267581 +0000 UTC m=+0.095078448 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:05:08 localhost podman[105926]: 2025-11-28 09:05:08.190954723 +0000 UTC m=+0.288765610 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, 
vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, 
config_id=tripleo_step1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:05:08 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:05:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37429 DF PROTO=TCP SPT=45106 DPT=9102 SEQ=56186255 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB8FD3A0000000001030307) Nov 28 04:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:05:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 04:05:09 localhost podman[105954]: 2025-11-28 09:05:09.973019748 +0000 UTC m=+0.081919322 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:05:10 localhost podman[105954]: 2025-11-28 09:05:10.013457712 +0000 UTC m=+0.122357236 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, architecture=x86_64, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, container_name=logrotate_crond, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron) Nov 28 04:05:10 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:05:10 localhost podman[105956]: 2025-11-28 09:05:10.038778852 +0000 UTC m=+0.143678264 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4) Nov 28 04:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:05:10 localhost podman[105956]: 2025-11-28 09:05:10.085848491 +0000 UTC m=+0.190747883 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc.) Nov 28 04:05:10 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:05:10 localhost podman[105955]: Error: container 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 is not running Nov 28 04:05:10 localhost podman[106004]: 2025-11-28 09:05:10.159944002 +0000 UTC m=+0.088000020 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:05:10 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=125/n/a Nov 28 04:05:10 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:05:10 localhost podman[106004]: 2025-11-28 09:05:10.55135345 +0000 UTC m=+0.479409498 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target) Nov 28 04:05:10 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:05:11 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:33:a3:19 MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45014 SEQ=27833604 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 28 04:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:05:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:05:12 localhost podman[106028]: 2025-11-28 09:05:12.987996558 +0000 UTC m=+0.094581173 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git) Nov 28 04:05:13 localhost podman[106028]: 2025-11-28 09:05:13.029442624 +0000 UTC m=+0.136027259 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git) Nov 28 04:05:13 localhost podman[106028]: unhealthy Nov 28 04:05:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. Nov 28 04:05:13 localhost podman[106029]: 2025-11-28 09:05:13.049111079 +0000 UTC m=+0.149797362 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 
'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, 
tcib_managed=true, version=17.1.12) Nov 28 04:05:13 localhost podman[106057]: 2025-11-28 09:05:13.101807352 +0000 UTC m=+0.102761235 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:05:13 localhost podman[106057]: 2025-11-28 09:05:13.117724731 +0000 UTC m=+0.118678534 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 04:05:13 localhost podman[106057]: unhealthy Nov 28 04:05:13 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:13 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:05:13 localhost podman[106029]: 2025-11-28 09:05:13.170609569 +0000 UTC m=+0.271295832 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4) Nov 28 04:05:13 localhost podman[106029]: unhealthy Nov 28 04:05:13 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 
04:05:13 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:05:13 localhost systemd[1]: tmp-crun.p6LUXg.mount: Deactivated successfully. Nov 28 04:05:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6017 DF PROTO=TCP SPT=55908 DPT=9882 SEQ=1613149440 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB910FA0000000001030307) Nov 28 04:05:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62494 DF PROTO=TCP SPT=54982 DPT=9100 SEQ=3662909776 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB91A3A0000000001030307) Nov 28 04:05:17 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:05:17 localhost recover_tripleo_nova_virtqemud[106091]: 62642 Nov 28 04:05:17 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:05:17 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. 
Nov 28 04:05:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62495 DF PROTO=TCP SPT=54982 DPT=9100 SEQ=3662909776 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB929FA0000000001030307) Nov 28 04:05:25 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:33:a3:19 MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=45014 SEQ=27833604 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 Nov 28 04:05:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13697 DF PROTO=TCP SPT=50388 DPT=9105 SEQ=3033705835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB945340000000001030307) Nov 28 04:05:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13698 DF PROTO=TCP SPT=50388 DPT=9105 SEQ=3033705835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9493A0000000001030307) Nov 28 04:05:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9568 DF PROTO=TCP SPT=60390 DPT=9105 SEQ=1153921280 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB954FA0000000001030307) Nov 28 04:05:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13700 DF PROTO=TCP SPT=50388 DPT=9105 SEQ=3033705835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB960FB0000000001030307) Nov 28 04:05:37 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. Nov 28 04:05:37 localhost podman[106092]: 2025-11-28 09:05:37.475822204 +0000 UTC m=+0.083374666 container health_status cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=collectd, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team) Nov 28 04:05:37 localhost podman[106092]: 2025-11-28 09:05:37.483280724 +0000 UTC m=+0.090833186 container exec_died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z) Nov 28 04:05:37 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Deactivated successfully. 
Nov 28 04:05:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:05:38 localhost podman[106113]: 2025-11-28 09:05:38.973950301 +0000 UTC m=+0.076790704 container health_status 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true) Nov 28 04:05:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47407 DF PROTO=TCP SPT=59842 DPT=9102 SEQ=3243152348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9727A0000000001030307) Nov 28 04:05:39 localhost podman[106113]: 2025-11-28 09:05:39.185409391 +0000 UTC m=+0.288249754 container exec_died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, 
batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a) Nov 28 04:05:39 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Deactivated successfully. Nov 28 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. 
Nov 28 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. Nov 28 04:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:05:40 localhost podman[106142]: 2025-11-28 09:05:40.988650749 +0000 UTC m=+0.091293051 container health_status 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, io.openshift.expose-services=, container_name=logrotate_crond, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:05:41 localhost podman[106143]: Error: container 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 is not running Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Main process exited, code=exited, status=125/n/a Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed with result 'exit-code'. 
Nov 28 04:05:41 localhost podman[106148]: 2025-11-28 09:05:41.050942627 +0000 UTC m=+0.141942190 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 
17.1_20251118.1, container_name=nova_migration_target, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:05:41 localhost podman[106142]: 2025-11-28 09:05:41.074461651 +0000 UTC m=+0.177103963 container exec_died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 04:05:41 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Deactivated successfully. 
Nov 28 04:05:41 localhost podman[106144]: 2025-11-28 09:05:41.091860316 +0000 UTC m=+0.186003456 container health_status 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid) Nov 28 04:05:41 localhost podman[106144]: 2025-11-28 09:05:41.10334687 +0000 UTC m=+0.197489960 container exec_died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=) Nov 28 04:05:41 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Deactivated successfully. 
Nov 28 04:05:41 localhost podman[106148]: 2025-11-28 09:05:41.43081274 +0000 UTC m=+0.521812263 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute) Nov 28 04:05:41 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. Nov 28 04:05:41 localhost podman[105893]: time="2025-11-28T09:05:41Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL" Nov 28 04:05:41 localhost systemd[1]: libpod-7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.scope: Deactivated successfully. Nov 28 04:05:41 localhost systemd[1]: libpod-7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.scope: Consumed 6.400s CPU time. 
Nov 28 04:05:41 localhost podman[105893]: 2025-11-28 09:05:41.7446318 +0000 UTC m=+42.109114386 container died 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi) Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.timer: Deactivated successfully. Nov 28 04:05:41 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6. Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed to open /run/systemd/transient/7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: No such file or directory Nov 28 04:05:41 localhost podman[105893]: 2025-11-28 09:05:41.799035026 +0000 UTC m=+42.163517582 container cleanup 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi) Nov 28 04:05:41 localhost podman[105893]: ceilometer_agent_ipmi Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.timer: Failed to open 
/run/systemd/transient/7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.timer: No such file or directory Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed to open /run/systemd/transient/7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: No such file or directory Nov 28 04:05:41 localhost podman[106214]: 2025-11-28 09:05:41.845451154 +0000 UTC m=+0.085822892 container cleanup 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 
'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi) Nov 28 04:05:41 localhost systemd[1]: libpod-conmon-7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.scope: Deactivated successfully. 
Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.timer: Failed to open /run/systemd/transient/7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.timer: No such file or directory Nov 28 04:05:41 localhost systemd[1]: 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: Failed to open /run/systemd/transient/7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6.service: No such file or directory Nov 28 04:05:41 localhost podman[106230]: 2025-11-28 09:05:41.926244651 +0000 UTC m=+0.044064398 container cleanup 7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, 
io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}) Nov 28 04:05:41 localhost podman[106230]: ceilometer_agent_ipmi Nov 28 04:05:41 localhost systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully. Nov 28 04:05:41 localhost systemd[1]: Stopped ceilometer_agent_ipmi container. Nov 28 04:05:41 localhost systemd[1]: tmp-crun.QykCaf.mount: Deactivated successfully. Nov 28 04:05:41 localhost systemd[1]: var-lib-containers-storage-overlay-b2363e42c8cc93f560c242c278a1b76f810df60301763e880790aefc5b17b52f-merged.mount: Deactivated successfully. Nov 28 04:05:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d61a684c799c99e7df2b110e7b08d8c3b10a949b562f2d88d54d1eb7d8503e6-userdata-shm.mount: Deactivated successfully. 
Nov 28 04:05:42 localhost python3.9[106334]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:42 localhost systemd[1]: Reloading. Nov 28 04:05:42 localhost systemd-rc-local-generator[106353]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:05:42 localhost systemd-sysv-generator[106360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:05:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13701 DF PROTO=TCP SPT=50388 DPT=9105 SEQ=3033705835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB980FA0000000001030307) Nov 28 04:05:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:05:43 localhost systemd[1]: Stopping collectd container... Nov 28 04:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:05:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 04:05:43 localhost podman[106398]: 2025-11-28 09:05:43.269033765 +0000 UTC m=+0.102833786 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true) Nov 28 04:05:43 localhost podman[106373]: 2025-11-28 09:05:43.228137527 +0000 UTC m=+0.147256074 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, 
Inc., version=17.1.12, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:05:43 localhost podman[106398]: 2025-11-28 09:05:43.284476201 +0000 UTC m=+0.118276222 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-type=git, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c) Nov 28 04:05:43 localhost podman[106398]: unhealthy Nov 28 04:05:43 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:43 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:05:43 localhost podman[106373]: 2025-11-28 09:05:43.309433659 +0000 UTC m=+0.228552276 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64) Nov 28 04:05:43 localhost podman[106373]: unhealthy Nov 28 04:05:43 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:43 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:05:43 localhost podman[106417]: 2025-11-28 09:05:43.373404369 +0000 UTC m=+0.136861294 container health_status ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible) Nov 28 04:05:43 localhost podman[106417]: 2025-11-28 09:05:43.395431296 +0000 UTC m=+0.158888241 container exec_died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 
'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, container_name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, 
config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute) Nov 28 04:05:43 localhost podman[106417]: unhealthy Nov 28 04:05:43 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:43 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:05:44 localhost systemd[1]: tmp-crun.Egtyr6.mount: Deactivated successfully. Nov 28 04:05:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28554 DF PROTO=TCP SPT=45652 DPT=9101 SEQ=3547471805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB986300000000001030307) Nov 28 04:05:44 localhost systemd[1]: libpod-cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.scope: Deactivated successfully. Nov 28 04:05:44 localhost systemd[1]: libpod-cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.scope: Consumed 2.115s CPU time. 
Nov 28 04:05:44 localhost podman[106375]: 2025-11-28 09:05:44.558803028 +0000 UTC m=+1.474140338 container stop cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z) Nov 28 04:05:44 localhost podman[106375]: 2025-11-28 09:05:44.595293401 +0000 UTC m=+1.510630731 container died cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 
'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.timer: Deactivated successfully. Nov 28 04:05:44 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277. 
Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Failed to open /run/systemd/transient/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: No such file or directory Nov 28 04:05:44 localhost systemd[1]: var-lib-containers-storage-overlay-cb78a9787fbfdee8df647dff935d3e6e34a25076546a1ccbc8a68d8c48f6925c-merged.mount: Deactivated successfully. Nov 28 04:05:44 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277-userdata-shm.mount: Deactivated successfully. Nov 28 04:05:44 localhost podman[106375]: 2025-11-28 09:05:44.689120099 +0000 UTC m=+1.604457419 container cleanup cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 28 04:05:44 localhost podman[106375]: collectd Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.timer: Failed to open /run/systemd/transient/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.timer: No such file or directory Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Failed to open /run/systemd/transient/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: No such file or directory Nov 28 04:05:44 localhost 
podman[106450]: 2025-11-28 09:05:44.709867698 +0000 UTC m=+0.138712611 container cleanup cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, 
config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd) Nov 28 04:05:44 localhost systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:05:44 localhost systemd[1]: libpod-conmon-cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.scope: Deactivated successfully. 
Nov 28 04:05:44 localhost podman[106476]: error opening file `/run/crun/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277/status`: No such file or directory Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.timer: Failed to open /run/systemd/transient/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.timer: No such file or directory Nov 28 04:05:44 localhost systemd[1]: cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: Failed to open /run/systemd/transient/cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277.service: No such file or directory Nov 28 04:05:44 localhost podman[106465]: 2025-11-28 09:05:44.807015628 +0000 UTC m=+0.067411226 container cleanup cb823bb3ef2bd83aa7ef957f3aca91a84b39b4427f978ef32db4305d4edb2277 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': 
'/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:05:44 localhost podman[106465]: collectd Nov 28 04:05:44 localhost systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'. Nov 28 04:05:44 localhost systemd[1]: Stopped collectd container. Nov 28 04:05:45 localhost python3.9[106569]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:45 localhost systemd[1]: Reloading. 
Nov 28 04:05:45 localhost systemd-rc-local-generator[106627]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:05:45 localhost systemd-sysv-generator[106630]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:05:45 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:05:46 localhost systemd[1]: Stopping iscsid container... Nov 28 04:05:46 localhost systemd[1]: libpod-9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.scope: Deactivated successfully. Nov 28 04:05:46 localhost systemd[1]: libpod-9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.scope: Consumed 1.070s CPU time. Nov 28 04:05:46 localhost podman[106655]: 2025-11-28 09:05:46.206954552 +0000 UTC m=+0.095090398 container died 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true) Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.timer: Deactivated successfully. Nov 28 04:05:46 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360. 
Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Failed to open /run/systemd/transient/9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: No such file or directory Nov 28 04:05:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360-userdata-shm.mount: Deactivated successfully. Nov 28 04:05:46 localhost systemd[1]: var-lib-containers-storage-overlay-6e24aa22dcc3c3aeaf326f993725e399e8f3215a32c5fb5c28a2698bed898907-merged.mount: Deactivated successfully. Nov 28 04:05:46 localhost podman[106655]: 2025-11-28 09:05:46.263639347 +0000 UTC m=+0.151775103 container cleanup 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public) Nov 28 04:05:46 localhost podman[106655]: iscsid Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.timer: Failed to open /run/systemd/transient/9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.timer: No such file or directory Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Failed to open /run/systemd/transient/9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: No such file or directory Nov 28 04:05:46 localhost podman[106672]: 2025-11-28 09:05:46.290939847 +0000 UTC m=+0.074214985 container cleanup 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, 
name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, 
Inc., description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, io.buildah.version=1.41.4) Nov 28 04:05:46 localhost systemd[1]: libpod-conmon-9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.scope: Deactivated successfully. Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.timer: Failed to open /run/systemd/transient/9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.timer: No such file or directory Nov 28 04:05:46 localhost systemd[1]: 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: Failed to open /run/systemd/transient/9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360.service: No such file or directory Nov 28 04:05:46 localhost podman[106686]: 2025-11-28 09:05:46.384484227 +0000 UTC m=+0.063285299 container cleanup 9be6f10185ad3e52f6217d1c4d2aaa05051206eedd7df0d205999efaa04c4360 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid) Nov 28 04:05:46 localhost podman[106686]: iscsid Nov 28 04:05:46 localhost systemd[1]: tripleo_iscsid.service: Deactivated successfully. Nov 28 04:05:46 localhost systemd[1]: Stopped iscsid container. 
Nov 28 04:05:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15319 DF PROTO=TCP SPT=34424 DPT=9100 SEQ=1906070116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB98F7A0000000001030307) Nov 28 04:05:47 localhost python3.9[106802]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:47 localhost systemd[1]: Reloading. Nov 28 04:05:47 localhost systemd-rc-local-generator[106841]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:05:47 localhost systemd-sysv-generator[106845]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:05:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:05:47 localhost systemd[1]: Stopping logrotate_crond container... Nov 28 04:05:47 localhost systemd[1]: libpod-719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.scope: Deactivated successfully. Nov 28 04:05:47 localhost systemd[1]: libpod-719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.scope: Consumed 1.014s CPU time. 
Nov 28 04:05:47 localhost podman[106858]: 2025-11-28 09:05:47.680316426 +0000 UTC m=+0.078063674 container died 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, com.redhat.component=openstack-cron-container, vcs-type=git) Nov 28 04:05:47 localhost systemd[1]: tmp-crun.N0lFj8.mount: Deactivated successfully. Nov 28 04:05:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.timer: Deactivated successfully. Nov 28 04:05:47 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae. Nov 28 04:05:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Failed to open /run/systemd/transient/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: No such file or directory Nov 28 04:05:47 localhost podman[106858]: 2025-11-28 09:05:47.802169338 +0000 UTC m=+0.199916586 container cleanup 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, 
com.redhat.component=openstack-cron-container, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}) Nov 28 04:05:47 localhost podman[106858]: logrotate_crond Nov 28 04:05:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.timer: Failed to open /run/systemd/transient/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.timer: No such file or directory Nov 28 04:05:47 localhost systemd[1]: 
719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Failed to open /run/systemd/transient/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: No such file or directory Nov 28 04:05:47 localhost podman[106871]: 2025-11-28 09:05:47.827234548 +0000 UTC m=+0.131213889 container cleanup 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron) Nov 28 04:05:47 localhost systemd[1]: libpod-conmon-719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.scope: Deactivated successfully. Nov 28 04:05:47 localhost podman[106900]: error opening file `/run/crun/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae/status`: No such file or directory Nov 28 04:05:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.timer: Failed to open /run/systemd/transient/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.timer: No such file or directory Nov 28 04:05:47 localhost systemd[1]: 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: Failed to open /run/systemd/transient/719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae.service: No such file or directory Nov 28 04:05:47 localhost podman[106889]: 2025-11-28 09:05:47.945165509 +0000 UTC m=+0.079142218 container cleanup 719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron) Nov 28 04:05:47 localhost podman[106889]: logrotate_crond Nov 28 04:05:47 localhost systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully. Nov 28 04:05:47 localhost systemd[1]: Stopped logrotate_crond container. Nov 28 04:05:48 localhost systemd[1]: tmp-crun.4gzU5W.mount: Deactivated successfully. Nov 28 04:05:48 localhost systemd[1]: var-lib-containers-storage-overlay-93bf0314bbd4063198be021c760bb47b8172c6cfa3163da2b90a6f202605824f-merged.mount: Deactivated successfully. Nov 28 04:05:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-719548fc87f254c80adab6f249c889f61ec2ef0efd715b6e26934904f5914eae-userdata-shm.mount: Deactivated successfully. Nov 28 04:05:48 localhost python3.9[106995]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:48 localhost systemd[1]: Reloading. Nov 28 04:05:48 localhost systemd-rc-local-generator[107018]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:05:48 localhost systemd-sysv-generator[107025]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:05:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:05:49 localhost systemd[1]: Stopping metrics_qdr container... 
Nov 28 04:05:49 localhost kernel: qdrouterd[55332]: segfault at 0 ip 00007f678c8617cb sp 00007ffdcb0a71a0 error 4 in libc.so.6[7f678c7fe000+175000] Nov 28 04:05:49 localhost kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9 Nov 28 04:05:49 localhost systemd[1]: Created slice Slice /system/systemd-coredump. Nov 28 04:05:49 localhost systemd[1]: Started Process Core Dump (PID 107050/UID 0). Nov 28 04:05:49 localhost systemd-coredump[107051]: Resource limits disable core dumping for process 55332 (qdrouterd). Nov 28 04:05:49 localhost systemd-coredump[107051]: Process 55332 (qdrouterd) of user 42465 dumped core. Nov 28 04:05:49 localhost systemd[1]: systemd-coredump@0-107050-0.service: Deactivated successfully. Nov 28 04:05:49 localhost podman[107036]: 2025-11-28 09:05:49.455906673 +0000 UTC m=+0.233047815 container died 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64) Nov 28 04:05:49 localhost systemd[1]: libpod-9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.scope: Deactivated successfully. Nov 28 04:05:49 localhost systemd[1]: libpod-9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.scope: Consumed 28.091s CPU time. Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.timer: Deactivated successfully. 
Nov 28 04:05:49 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74. Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Failed to open /run/systemd/transient/9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: No such file or directory Nov 28 04:05:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74-userdata-shm.mount: Deactivated successfully. Nov 28 04:05:49 localhost podman[107036]: 2025-11-28 09:05:49.510280847 +0000 UTC m=+0.287421979 container cleanup 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4) Nov 28 04:05:49 localhost podman[107036]: metrics_qdr Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.timer: Failed to open /run/systemd/transient/9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.timer: No such file or directory Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Failed to open /run/systemd/transient/9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: No such file or directory Nov 28 04:05:49 localhost podman[107055]: 2025-11-28 09:05:49.534501132 +0000 UTC m=+0.063872177 container cleanup 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, 
name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 04:05:49 localhost systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a Nov 28 04:05:49 localhost systemd[1]: libpod-conmon-9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.scope: Deactivated successfully. Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.timer: Failed to open /run/systemd/transient/9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.timer: No such file or directory Nov 28 04:05:49 localhost systemd[1]: 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: Failed to open /run/systemd/transient/9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74.service: No such file or directory Nov 28 04:05:49 localhost podman[107067]: 2025-11-28 09:05:49.643592931 +0000 UTC m=+0.074038261 container cleanup 9ebc6c348d37137661c5094c88cfc7a2b1d54aa7c60a526cd6427459d35a7b74 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=metrics_qdr, 
com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6e6d33b0e4909c73f2f7adca3bc870a0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd) Nov 28 04:05:49 localhost podman[107067]: metrics_qdr Nov 28 04:05:49 localhost systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'. Nov 28 04:05:49 localhost systemd[1]: Stopped metrics_qdr container. 
Nov 28 04:05:49 localhost systemd[1]: var-lib-containers-storage-overlay-876187b8bc68a02fc79261d7a49dfade5cc37ca730d23b4f758fcf788c522d06-merged.mount: Deactivated successfully. Nov 28 04:05:50 localhost python3.9[107170]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15320 DF PROTO=TCP SPT=34424 DPT=9100 SEQ=1906070116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB99F3A0000000001030307) Nov 28 04:05:51 localhost python3.9[107263]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:51 localhost python3.9[107356]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:52 localhost python3.9[107449]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:05:52 localhost systemd[1]: Reloading. Nov 28 04:05:52 localhost systemd-rc-local-generator[107478]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:05:52 localhost systemd-sysv-generator[107481]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:05:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:05:53 localhost systemd[1]: Stopping nova_compute container... Nov 28 04:05:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24302 DF PROTO=TCP SPT=33784 DPT=9105 SEQ=2010536769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9BA630000000001030307) Nov 28 04:05:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24303 DF PROTO=TCP SPT=33784 DPT=9105 SEQ=2010536769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9BE7B0000000001030307) Nov 28 04:05:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15321 DF PROTO=TCP SPT=34424 DPT=9100 SEQ=1906070116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9BEFA0000000001030307) Nov 28 04:06:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59809 DF PROTO=TCP SPT=54994 DPT=9105 SEQ=3520360375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9CAFA0000000001030307) Nov 28 04:06:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24305 DF PROTO=TCP SPT=33784 DPT=9105 SEQ=2010536769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9D63B0000000001030307) Nov 28 04:06:09 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25369 DF PROTO=TCP SPT=47772 DPT=9102 SEQ=3617329224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9E7BA0000000001030307) Nov 28 04:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860. Nov 28 04:06:11 localhost systemd[1]: tmp-crun.WTCMYG.mount: Deactivated successfully. Nov 28 04:06:11 localhost podman[107504]: 2025-11-28 09:06:11.729757801 +0000 UTC m=+0.085881445 container health_status e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, container_name=nova_migration_target, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z) Nov 28 04:06:12 localhost podman[107504]: 2025-11-28 09:06:12.128382781 +0000 UTC m=+0.484506405 container exec_died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git) Nov 28 04:06:12 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Deactivated successfully. 
Nov 28 04:06:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24306 DF PROTO=TCP SPT=33784 DPT=9105 SEQ=2010536769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9F6FA0000000001030307) Nov 28 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. Nov 28 04:06:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:06:13 localhost systemd[1]: tmp-crun.v6CapV.mount: Deactivated successfully. Nov 28 04:06:13 localhost podman[107528]: 2025-11-28 09:06:13.960943571 +0000 UTC m=+0.068698355 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, 
build-date=2025-11-18T23:34:05Z, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, tcib_managed=true) Nov 28 04:06:14 localhost podman[107528]: 2025-11-28 09:06:14.000669854 +0000 UTC m=+0.108424708 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller) Nov 28 04:06:14 localhost podman[107528]: unhealthy Nov 28 04:06:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:06:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:06:14 localhost podman[107529]: Error: container ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 is not running Nov 28 04:06:14 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Main process exited, code=exited, status=125/n/a Nov 28 04:06:14 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed with result 'exit-code'. Nov 28 04:06:14 localhost podman[107530]: 2025-11-28 09:06:14.00440911 +0000 UTC m=+0.105710536 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64, release=1761123044, vcs-type=git, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, 
url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public) Nov 28 04:06:14 localhost podman[107530]: 2025-11-28 09:06:14.08630607 +0000 UTC m=+0.187607436 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z) Nov 28 04:06:14 localhost podman[107530]: unhealthy Nov 28 04:06:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:06:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:06:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58646 DF PROTO=TCP SPT=47690 DPT=9882 SEQ=3581178474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AB9FAFB0000000001030307) Nov 28 04:06:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48906 DF PROTO=TCP SPT=33522 DPT=9100 SEQ=460410310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA04BB0000000001030307) Nov 28 04:06:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48907 DF PROTO=TCP SPT=33522 DPT=9100 SEQ=460410310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA147A0000000001030307) Nov 28 04:06:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c 
MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16889 DF PROTO=TCP SPT=59482 DPT=9105 SEQ=3206880083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA2F920000000001030307) Nov 28 04:06:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16890 DF PROTO=TCP SPT=59482 DPT=9105 SEQ=3206880083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA33BA0000000001030307) Nov 28 04:06:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15052 DF PROTO=TCP SPT=38194 DPT=9882 SEQ=2964310619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA34BC0000000001030307) Nov 28 04:06:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13703 DF PROTO=TCP SPT=50388 DPT=9105 SEQ=3033705835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA3EFA0000000001030307) Nov 28 04:06:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16892 DF PROTO=TCP SPT=59482 DPT=9105 SEQ=3206880083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA4B7A0000000001030307) Nov 28 04:06:35 localhost podman[107490]: time="2025-11-28T09:06:35Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL" Nov 28 04:06:35 localhost systemd[1]: libpod-ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.scope: Deactivated successfully. 
Nov 28 04:06:35 localhost systemd[1]: libpod-ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.scope: Consumed 26.850s CPU time. Nov 28 04:06:35 localhost podman[107490]: 2025-11-28 09:06:35.177276797 +0000 UTC m=+42.110968467 container died ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible) Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.timer: Deactivated successfully. Nov 28 04:06:35 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0. 
Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed to open /run/systemd/transient/ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: No such file or directory Nov 28 04:06:35 localhost systemd[1]: var-lib-containers-storage-overlay-5442f5016f7f6fcccf64f4788496955298bf5e3ac09b77bd37d30b08717aca4a-merged.mount: Deactivated successfully. Nov 28 04:06:35 localhost podman[107490]: 2025-11-28 09:06:35.232656122 +0000 UTC m=+42.166347762 container cleanup ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible) Nov 28 04:06:35 localhost podman[107490]: nova_compute Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.timer: Failed to open /run/systemd/transient/ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.timer: No such file or 
directory Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed to open /run/systemd/transient/ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: No such file or directory Nov 28 04:06:35 localhost podman[107580]: 2025-11-28 09:06:35.310546849 +0000 UTC m=+0.125903007 container cleanup ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_compute) Nov 28 04:06:35 localhost systemd[1]: libpod-conmon-ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.scope: Deactivated successfully. 
Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.timer: Failed to open /run/systemd/transient/ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.timer: No such file or directory Nov 28 04:06:35 localhost systemd[1]: ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: Failed to open /run/systemd/transient/ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0.service: No such file or directory Nov 28 04:06:35 localhost podman[107596]: 2025-11-28 09:06:35.416497511 +0000 UTC m=+0.071485091 container cleanup ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1) Nov 28 04:06:35 localhost podman[107596]: nova_compute Nov 28 04:06:35 localhost systemd[1]: tripleo_nova_compute.service: Deactivated successfully. Nov 28 04:06:35 localhost systemd[1]: Stopped nova_compute container. 
Nov 28 04:06:35 localhost systemd[1]: tripleo_nova_compute.service: Consumed 1.146s CPU time, no IO. Nov 28 04:06:36 localhost python3.9[107698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:06:36 localhost systemd[1]: Reloading. Nov 28 04:06:36 localhost systemd-sysv-generator[107731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:06:36 localhost systemd-rc-local-generator[107725]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:06:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:06:36 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:06:36 localhost systemd[1]: Starting dnf makecache... Nov 28 04:06:36 localhost systemd[1]: Stopping nova_migration_target container... Nov 28 04:06:36 localhost recover_tripleo_nova_virtqemud[107741]: 62642 Nov 28 04:06:36 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:06:36 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 04:06:36 localhost systemd[1]: libpod-e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.scope: Deactivated successfully. Nov 28 04:06:36 localhost systemd[1]: libpod-e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.scope: Consumed 32.824s CPU time. 
Nov 28 04:06:36 localhost podman[107742]: 2025-11-28 09:06:36.858281483 +0000 UTC m=+0.078710824 container died e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 28 04:06:36 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.timer: Deactivated successfully.
Nov 28 04:06:36 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.
Nov 28 04:06:36 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Failed to open /run/systemd/transient/e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: No such file or directory
Nov 28 04:06:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860-userdata-shm.mount: Deactivated successfully.
Nov 28 04:06:36 localhost systemd[1]: var-lib-containers-storage-overlay-3850d276a9594c52a78e85d7b58db016dc835caf89f3a263b0f9d37a3754a60d-merged.mount: Deactivated successfully.
Nov 28 04:06:36 localhost podman[107742]: 2025-11-28 09:06:36.911217382 +0000 UTC m=+0.131646693 container cleanup e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, container_name=nova_migration_target, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 28 04:06:36 localhost podman[107742]: nova_migration_target
Nov 28 04:06:36 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.timer: Failed to open /run/systemd/transient/e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.timer: No such file or directory
Nov 28 04:06:36 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Failed to open /run/systemd/transient/e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: No such file or directory
Nov 28 04:06:36 localhost podman[107756]: 2025-11-28 09:06:36.936322285 +0000 UTC m=+0.074130583 container cleanup e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Nov 28 04:06:36 localhost systemd[1]: libpod-conmon-e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.scope: Deactivated successfully.
Nov 28 04:06:37 localhost dnf[107739]: Updating Subscription Management repositories.
Nov 28 04:06:37 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.timer: Failed to open /run/systemd/transient/e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.timer: No such file or directory
Nov 28 04:06:37 localhost systemd[1]: e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: Failed to open /run/systemd/transient/e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860.service: No such file or directory
Nov 28 04:06:37 localhost podman[107771]: 2025-11-28 09:06:37.032925538 +0000 UTC m=+0.068027664 container cleanup e9b02b0c931bf0c269a9a5374b3e037ce986d87a003b12282cbd56e201e75860 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4)
Nov 28 04:06:37 localhost podman[107771]: nova_migration_target
Nov 28 04:06:37 localhost systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Nov 28 04:06:37 localhost systemd[1]: Stopped nova_migration_target container.
Nov 28 04:06:37 localhost python3.9[107874]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:06:37 localhost systemd[1]: Reloading.
Nov 28 04:06:37 localhost systemd-rc-local-generator[107901]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:06:37 localhost systemd-sysv-generator[107906]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:06:38 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:06:38 localhost systemd[1]: Stopping nova_virtlogd_wrapper container...
Nov 28 04:06:38 localhost systemd[1]: tmp-crun.2abe5z.mount: Deactivated successfully.
Nov 28 04:06:38 localhost systemd[1]: libpod-2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711.scope: Deactivated successfully.
Nov 28 04:06:38 localhost podman[107915]: 2025-11-28 09:06:38.251800248 +0000 UTC m=+0.086930707 container died 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, maintainer=OpenStack TripleO Team, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Nov 28 04:06:38 localhost podman[107915]: 2025-11-28 09:06:38.29083181 +0000 UTC m=+0.125962269 container cleanup 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Nov 28 04:06:38 localhost podman[107915]: nova_virtlogd_wrapper
Nov 28 04:06:38 localhost podman[107928]: 2025-11-28 09:06:38.323215737 +0000 UTC m=+0.066574840 container cleanup 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, vcs-type=git, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, io.buildah.version=1.41.4)
Nov 28 04:06:38 localhost dnf[107739]: Metadata cache refreshed recently.
Nov 28 04:06:38 localhost systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 28 04:06:38 localhost systemd[1]: Finished dnf makecache.
Nov 28 04:06:38 localhost systemd[1]: dnf-makecache.service: Consumed 1.973s CPU time.
Nov 28 04:06:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56580 DF PROTO=TCP SPT=47098 DPT=9102 SEQ=1094284258 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA5CBA0000000001030307)
Nov 28 04:06:39 localhost systemd[1]: tmp-crun.8JZiqw.mount: Deactivated successfully.
Nov 28 04:06:39 localhost systemd[1]: var-lib-containers-storage-overlay-58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037-merged.mount: Deactivated successfully.
Nov 28 04:06:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711-userdata-shm.mount: Deactivated successfully.
Nov 28 04:06:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16893 DF PROTO=TCP SPT=59482 DPT=9105 SEQ=3206880083 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA6AFB0000000001030307)
Nov 28 04:06:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64054 DF PROTO=TCP SPT=41116 DPT=9101 SEQ=214251831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA70920000000001030307)
Nov 28 04:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.
Nov 28 04:06:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.
Nov 28 04:06:44 localhost podman[107944]: 2025-11-28 09:06:44.480731451 +0000 UTC m=+0.086209875 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Nov 28 04:06:44 localhost podman[107943]: 2025-11-28 09:06:44.531244486 +0000 UTC m=+0.135792551 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=ovn_controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Nov 28 04:06:44 localhost podman[107944]: 2025-11-28 09:06:44.554306656 +0000 UTC m=+0.159785090 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, config_id=tripleo_step4)
Nov 28 04:06:44 localhost podman[107944]: unhealthy
Nov 28 04:06:44 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:06:44 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'.
Nov 28 04:06:44 localhost podman[107943]: 2025-11-28 09:06:44.570768063 +0000 UTC m=+0.175316138 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Nov 28 04:06:44 localhost podman[107943]: unhealthy
Nov 28 04:06:44 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:06:44 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'.
Nov 28 04:06:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12746 DF PROTO=TCP SPT=45140 DPT=9100 SEQ=3843044834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA79BA0000000001030307) Nov 28 04:06:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12747 DF PROTO=TCP SPT=45140 DPT=9100 SEQ=3843044834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABA897B0000000001030307) Nov 28 04:06:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14296 DF PROTO=TCP SPT=44144 DPT=9105 SEQ=3700151614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAA4C30000000001030307) Nov 28 04:06:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14297 DF PROTO=TCP SPT=44144 DPT=9105 SEQ=3700151614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAA8BA0000000001030307) Nov 28 04:06:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12748 DF PROTO=TCP SPT=45140 DPT=9100 SEQ=3843044834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAA8FA0000000001030307) Nov 28 04:07:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24308 DF PROTO=TCP SPT=33784 DPT=9105 SEQ=2010536769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5ABAB4FB0000000001030307) Nov 28 04:07:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14299 DF PROTO=TCP SPT=44144 DPT=9105 SEQ=3700151614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAC07B0000000001030307) Nov 28 04:07:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56522 DF PROTO=TCP SPT=49498 DPT=9102 SEQ=237804585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAD1FA0000000001030307) Nov 28 04:07:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14300 DF PROTO=TCP SPT=44144 DPT=9105 SEQ=3700151614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAE0FA0000000001030307) Nov 28 04:07:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15402 DF PROTO=TCP SPT=55336 DPT=9101 SEQ=43871835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAE5C00000000001030307) Nov 28 04:07:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:07:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. 
Nov 28 04:07:14 localhost podman[108062]: 2025-11-28 09:07:14.738174294 +0000 UTC m=+0.084407830 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:07:14 localhost podman[108061]: 2025-11-28 09:07:14.786606885 +0000 UTC m=+0.134428149 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, vcs-type=git, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, release=1761123044) Nov 28 04:07:14 localhost podman[108062]: 2025-11-28 09:07:14.805655141 +0000 UTC m=+0.151888627 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, 
name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=) Nov 28 04:07:14 localhost podman[108062]: unhealthy Nov 28 04:07:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:07:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. Nov 28 04:07:14 localhost podman[108061]: 2025-11-28 09:07:14.826941626 +0000 UTC m=+0.174762860 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, version=17.1.12, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 
'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller) Nov 28 04:07:14 localhost podman[108061]: unhealthy Nov 28 04:07:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:07:14 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:07:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24417 DF PROTO=TCP SPT=39450 DPT=9100 SEQ=2313255533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAEEFA0000000001030307) Nov 28 04:07:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24418 DF PROTO=TCP SPT=39450 DPT=9100 SEQ=2313255533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABAFEBB0000000001030307) Nov 28 04:07:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6112 DF PROTO=TCP SPT=60306 DPT=9105 SEQ=3988060708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB19F30000000001030307) Nov 28 04:07:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6113 DF PROTO=TCP SPT=60306 DPT=9105 SEQ=3988060708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB1DFB0000000001030307) Nov 28 04:07:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24419 DF PROTO=TCP SPT=39450 DPT=9100 SEQ=2313255533 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB1EFA0000000001030307) Nov 28 04:07:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22552 DF PROTO=TCP SPT=46434 DPT=9882 SEQ=2138006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5ABB2B3A0000000001030307) Nov 28 04:07:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6115 DF PROTO=TCP SPT=60306 DPT=9105 SEQ=3988060708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB35BA0000000001030307) Nov 28 04:07:37 localhost systemd[1]: Starting Check and recover tripleo_nova_virtqemud... Nov 28 04:07:37 localhost recover_tripleo_nova_virtqemud[108101]: 62642 Nov 28 04:07:37 localhost systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully. Nov 28 04:07:37 localhost systemd[1]: Finished Check and recover tripleo_nova_virtqemud. Nov 28 04:07:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65460 DF PROTO=TCP SPT=39772 DPT=9102 SEQ=941991457 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB473A0000000001030307) Nov 28 04:07:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6116 DF PROTO=TCP SPT=60306 DPT=9105 SEQ=3988060708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB56FB0000000001030307) Nov 28 04:07:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29840 DF PROTO=TCP SPT=52336 DPT=9101 SEQ=2319000963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB5AF00000000001030307) Nov 28 04:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. 
Nov 28 04:07:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:07:44 localhost podman[108102]: 2025-11-28 09:07:44.97040713 +0000 UTC m=+0.078785516 container health_status 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container) Nov 28 04:07:44 localhost podman[108102]: 2025-11-28 09:07:44.988447346 +0000 UTC m=+0.096825732 container exec_died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true) Nov 28 04:07:45 localhost podman[108103]: 2025-11-28 09:07:45.026773066 +0000 UTC m=+0.134283635 container health_status e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container) Nov 28 04:07:45 localhost podman[108102]: unhealthy Nov 28 04:07:45 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:07:45 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed with result 'exit-code'. 
Nov 28 04:07:45 localhost podman[108103]: 2025-11-28 09:07:45.065680622 +0000 UTC m=+0.173191151 container exec_died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:07:45 localhost podman[108103]: unhealthy Nov 28 04:07:45 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:07:45 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed with result 'exit-code'. 
Nov 28 04:07:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11948 DF PROTO=TCP SPT=34136 DPT=9100 SEQ=2955239952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB643B0000000001030307)
Nov 28 04:07:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11949 DF PROTO=TCP SPT=34136 DPT=9100 SEQ=2955239952 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB73FB0000000001030307)
Nov 28 04:07:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35544 DF PROTO=TCP SPT=46700 DPT=9105 SEQ=4195846488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB8F230000000001030307)
Nov 28 04:07:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35545 DF PROTO=TCP SPT=46700 DPT=9105 SEQ=4195846488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB933A0000000001030307)
Nov 28 04:07:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31862 DF PROTO=TCP SPT=41944 DPT=9882 SEQ=2607740531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB944D0000000001030307)
Nov 28 04:08:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14302 DF PROTO=TCP SPT=44144 DPT=9105 SEQ=3700151614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABB9EFA0000000001030307)
Nov 28 04:08:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Nov 28 04:08:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61828 (conmon) with signal SIGKILL.
Nov 28 04:08:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Nov 28 04:08:02 localhost systemd[1]: libpod-conmon-2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711.scope: Deactivated successfully.
Nov 28 04:08:02 localhost systemd[1]: tmp-crun.xt1OnO.mount: Deactivated successfully.
Nov 28 04:08:02 localhost podman[108234]: error opening file `/run/crun/2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711/status`: No such file or directory
Nov 28 04:08:02 localhost podman[108221]: 2025-11-28 09:08:02.468875289 +0000 UTC m=+0.073675159 container cleanup 2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vcs-type=git, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1)
Nov 28 04:08:02 localhost podman[108221]: nova_virtlogd_wrapper
Nov 28 04:08:02 localhost systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Nov 28 04:08:02 localhost systemd[1]: Stopped nova_virtlogd_wrapper container.
Nov 28 04:08:03 localhost python3.9[108327]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:08:03 localhost systemd[1]: Reloading.
Nov 28 04:08:03 localhost systemd-sysv-generator[108360]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:08:03 localhost systemd-rc-local-generator[108353]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:08:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:08:03 localhost systemd[1]: Stopping nova_virtnodedevd container...
Nov 28 04:08:03 localhost systemd[1]: libpod-490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436.scope: Deactivated successfully.
Nov 28 04:08:03 localhost systemd[1]: libpod-490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436.scope: Consumed 1.310s CPU time.
Nov 28 04:08:03 localhost podman[108368]: 2025-11-28 09:08:03.707893649 +0000 UTC m=+0.074440722 container died 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step3, distribution-scope=public, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, version=17.1.12, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044)
Nov 28 04:08:03 localhost systemd[1]: tmp-crun.s0WZ3V.mount: Deactivated successfully.
Nov 28 04:08:03 localhost podman[108368]: 2025-11-28 09:08:03.752091289 +0000 UTC m=+0.118638362 container cleanup 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_virtnodedevd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 28 04:08:03 localhost podman[108368]: nova_virtnodedevd
Nov 28 04:08:03 localhost podman[108382]: 2025-11-28 09:08:03.792040789 +0000 UTC m=+0.073329358 container cleanup 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=, container_name=nova_virtnodedevd, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 28 04:08:03 localhost systemd[1]: libpod-conmon-490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436.scope: Deactivated successfully.
Nov 28 04:08:03 localhost podman[108411]: error opening file `/run/crun/490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436/status`: No such file or directory
Nov 28 04:08:03 localhost podman[108399]: 2025-11-28 09:08:03.897255948 +0000 UTC m=+0.073276836 container cleanup 490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, container_name=nova_virtnodedevd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Nov 28 04:08:03 localhost podman[108399]: nova_virtnodedevd
Nov 28 04:08:03 localhost systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Nov 28 04:08:03 localhost systemd[1]: Stopped nova_virtnodedevd container.
Nov 28 04:08:04 localhost python3.9[108504]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:08:04 localhost systemd[1]: var-lib-containers-storage-overlay-95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0-merged.mount: Deactivated successfully.
Nov 28 04:08:04 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-490141dc0beecfbdec2cf756928e0dd5b717de05c10e967326da43f7b52be436-userdata-shm.mount: Deactivated successfully.
Nov 28 04:08:04 localhost systemd[1]: Reloading.
Nov 28 04:08:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35547 DF PROTO=TCP SPT=46700 DPT=9105 SEQ=4195846488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBAAFB0000000001030307)
Nov 28 04:08:04 localhost systemd-sysv-generator[108536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:08:04 localhost systemd-rc-local-generator[108531]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:08:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:08:04 localhost systemd[1]: Stopping nova_virtproxyd container...
Nov 28 04:08:05 localhost systemd[1]: libpod-7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657.scope: Deactivated successfully.
Nov 28 04:08:05 localhost podman[108545]: 2025-11-28 09:08:05.033205995 +0000 UTC m=+0.057024656 container died 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Nov 28 04:08:05 localhost podman[108545]: 2025-11-28 09:08:05.069023848 +0000 UTC m=+0.092842439 container cleanup 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_virtproxyd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Nov 28 04:08:05 localhost podman[108545]: nova_virtproxyd
Nov 28 04:08:05 localhost podman[108560]: 2025-11-28 09:08:05.111774523 +0000 UTC m=+0.063487354 container cleanup 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, io.buildah.version=1.41.4, config_id=tripleo_step3, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1)
Nov 28 04:08:05 localhost systemd[1]: libpod-conmon-7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657.scope: Deactivated successfully.
Nov 28 04:08:05 localhost podman[108586]: error opening file `/run/crun/7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657/status`: No such file or directory
Nov 28 04:08:05 localhost podman[108575]: 2025-11-28 09:08:05.20912505 +0000 UTC m=+0.069946254 container cleanup 7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1,
io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc.) Nov 28 04:08:05 localhost podman[108575]: nova_virtproxyd Nov 28 04:08:05 localhost systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully. Nov 28 04:08:05 localhost systemd[1]: Stopped nova_virtproxyd container. Nov 28 04:08:05 localhost systemd[1]: var-lib-containers-storage-overlay-0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8-merged.mount: Deactivated successfully. Nov 28 04:08:05 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f1efe5480b4850e72969a411413c723808a2e9f2a72da0ab9b5bc407d874657-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:05 localhost python3.9[108679]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:06 localhost systemd[1]: Reloading. Nov 28 04:08:06 localhost systemd-rc-local-generator[108702]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:06 localhost systemd-sysv-generator[108707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:08:06 localhost systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully. Nov 28 04:08:06 localhost systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m. Nov 28 04:08:06 localhost systemd[1]: Stopping nova_virtqemud container... 
Nov 28 04:08:06 localhost systemd[1]: libpod-929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432.scope: Deactivated successfully. Nov 28 04:08:06 localhost systemd[1]: libpod-929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432.scope: Consumed 1.944s CPU time. Nov 28 04:08:06 localhost podman[108720]: 2025-11-28 09:08:06.438231666 +0000 UTC m=+0.082306435 container died 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, release=1761123044, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4) Nov 28 04:08:06 localhost podman[108720]: 2025-11-28 09:08:06.472724317 +0000 UTC m=+0.116799086 container cleanup 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 04:08:06 localhost podman[108720]: nova_virtqemud Nov 28 04:08:06 localhost podman[108734]: 2025-11-28 09:08:06.524409358 +0000 UTC m=+0.070308945 container cleanup 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, 
container_name=nova_virtqemud, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true) Nov 28 04:08:06 localhost systemd[1]: libpod-conmon-929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432.scope: Deactivated successfully. Nov 28 04:08:06 localhost podman[108764]: error opening file `/run/crun/929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432/status`: No such file or directory Nov 28 04:08:06 localhost podman[108751]: 2025-11-28 09:08:06.635185958 +0000 UTC m=+0.069382997 container cleanup 929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, release=1761123044, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, 
org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:08:06 localhost podman[108751]: nova_virtqemud Nov 28 04:08:06 localhost systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully. Nov 28 04:08:06 localhost systemd[1]: Stopped nova_virtqemud container. Nov 28 04:08:06 localhost systemd[1]: var-lib-containers-storage-overlay-0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073-merged.mount: Deactivated successfully. Nov 28 04:08:06 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:07 localhost python3.9[108857]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:07 localhost systemd[1]: Reloading. Nov 28 04:08:07 localhost systemd-rc-local-generator[108883]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:07 localhost systemd-sysv-generator[108889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:07 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:08:08 localhost python3.9[108986]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20004 DF PROTO=TCP SPT=53812 DPT=9102 SEQ=1366834653 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBBC7A0000000001030307) Nov 28 04:08:09 localhost systemd[1]: Reloading. Nov 28 04:08:09 localhost systemd-sysv-generator[109016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:09 localhost systemd-rc-local-generator[109013]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:08:10 localhost systemd[1]: Stopping nova_virtsecretd container... Nov 28 04:08:10 localhost systemd[1]: libpod-c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808.scope: Deactivated successfully. 
Nov 28 04:08:10 localhost podman[109027]: 2025-11-28 09:08:10.111133277 +0000 UTC m=+0.077145786 container died c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtsecretd, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, batch=17.1_20251118.1) Nov 28 04:08:10 localhost podman[109027]: 2025-11-28 09:08:10.147055103 +0000 UTC m=+0.113067562 container cleanup c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, container_name=nova_virtsecretd, release=1761123044, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step3) Nov 28 04:08:10 localhost podman[109027]: nova_virtsecretd Nov 28 04:08:10 localhost podman[109041]: 2025-11-28 09:08:10.193929305 +0000 UTC m=+0.069683375 container cleanup c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 
'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtsecretd, maintainer=OpenStack TripleO Team) 
Nov 28 04:08:10 localhost systemd[1]: libpod-conmon-c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808.scope: Deactivated successfully. Nov 28 04:08:10 localhost podman[109069]: error opening file `/run/crun/c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808/status`: No such file or directory Nov 28 04:08:10 localhost podman[109058]: 2025-11-28 09:08:10.297684499 +0000 UTC m=+0.073157343 container cleanup c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, batch=17.1_20251118.1, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 04:08:10 localhost podman[109058]: nova_virtsecretd Nov 28 04:08:10 localhost systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully. Nov 28 04:08:10 localhost systemd[1]: Stopped nova_virtsecretd container. 
Nov 28 04:08:11 localhost python3.9[109163]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:11 localhost systemd[1]: var-lib-containers-storage-overlay-897f35829b1f881949b1c333f7f4948d19933191339ff7279e3c8582c9dcbd21-merged.mount: Deactivated successfully. Nov 28 04:08:11 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c819cf470c2869c75c471bbedd276e4a2f4c93050051a8f401cabeeedb4a8808-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:11 localhost systemd[1]: Reloading. Nov 28 04:08:11 localhost systemd-rc-local-generator[109187]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:11 localhost systemd-sysv-generator[109193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:08:11 localhost systemd[1]: Stopping nova_virtstoraged container... Nov 28 04:08:11 localhost systemd[1]: libpod-77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0.scope: Deactivated successfully. 
Nov 28 04:08:11 localhost podman[109203]: 2025-11-28 09:08:11.55392514 +0000 UTC m=+0.083742939 container died 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 04:08:11 localhost podman[109203]: 2025-11-28 09:08:11.59424306 +0000 UTC m=+0.124060789 container cleanup 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, container_name=nova_virtstoraged, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1) Nov 28 04:08:11 localhost podman[109203]: nova_virtstoraged Nov 28 04:08:11 localhost podman[109218]: 2025-11-28 09:08:11.638698879 +0000 UTC m=+0.072571935 container cleanup 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtstoraged, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, release=1761123044, build-date=2025-11-19T00:35:22Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream) Nov 28 04:08:11 localhost systemd[1]: libpod-conmon-77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0.scope: Deactivated successfully. Nov 28 04:08:11 localhost podman[109245]: error opening file `/run/crun/77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0/status`: No such file or directory Nov 28 04:08:11 localhost podman[109234]: 2025-11-28 09:08:11.73811812 +0000 UTC m=+0.067214840 container cleanup 77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, 
vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'bbb5ea37891e3118676a78b59837de90'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, release=1761123044, vcs-type=git, build-date=2025-11-19T00:35:22Z, 
maintainer=OpenStack TripleO Team) Nov 28 04:08:11 localhost podman[109234]: nova_virtstoraged Nov 28 04:08:11 localhost systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully. Nov 28 04:08:11 localhost systemd[1]: Stopped nova_virtstoraged container. Nov 28 04:08:12 localhost systemd[1]: var-lib-containers-storage-overlay-c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a-merged.mount: Deactivated successfully. Nov 28 04:08:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77e9d72df8f79ee50de7116306a3a6d3da17ccdfda2a4c48233804c3562cc2f0-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:12 localhost python3.9[109340]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:12 localhost systemd[1]: Reloading. Nov 28 04:08:12 localhost systemd-rc-local-generator[109367]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:12 localhost systemd-sysv-generator[109370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:08:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35548 DF PROTO=TCP SPT=46700 DPT=9105 SEQ=4195846488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBCAFA0000000001030307) Nov 28 04:08:12 localhost systemd[1]: Stopping ovn_controller container... 
Nov 28 04:08:12 localhost systemd[1]: tmp-crun.teiCMy.mount: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: libpod-9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.scope: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: libpod-9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.scope: Consumed 2.490s CPU time. Nov 28 04:08:13 localhost podman[109381]: 2025-11-28 09:08:13.009590378 +0000 UTC m=+0.084236154 container died 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com) Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.timer: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164. Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed to open /run/systemd/transient/9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: No such file or directory Nov 28 04:08:13 localhost systemd[1]: tmp-crun.qR2apN.mount: Deactivated successfully. 
Nov 28 04:08:13 localhost podman[109381]: 2025-11-28 09:08:13.057277597 +0000 UTC m=+0.131923343 container cleanup 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}) Nov 28 04:08:13 localhost podman[109381]: ovn_controller Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.timer: Failed to open /run/systemd/transient/9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.timer: No such file or directory Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed to open /run/systemd/transient/9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: No such file or directory Nov 28 04:08:13 localhost podman[109393]: 2025-11-28 09:08:13.095009598 +0000 UTC m=+0.073956967 container cleanup 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, summary=Red 
Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_controller, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05) Nov 28 04:08:13 localhost systemd[1]: var-lib-containers-storage-overlay-c6bc8e2b5666799e64c84f093eb3569ddc3bccd8602a09788ea75d9b81e61916-merged.mount: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: libpod-conmon-9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.scope: Deactivated successfully. 
Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.timer: Failed to open /run/systemd/transient/9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.timer: No such file or directory Nov 28 04:08:13 localhost systemd[1]: 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: Failed to open /run/systemd/transient/9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164.service: No such file or directory Nov 28 04:08:13 localhost podman[109409]: 2025-11-28 09:08:13.207539032 +0000 UTC m=+0.077116694 container cleanup 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, release=1761123044, container_name=ovn_controller, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:08:13 localhost podman[109409]: ovn_controller Nov 28 04:08:13 localhost systemd[1]: tripleo_ovn_controller.service: Deactivated successfully. Nov 28 04:08:13 localhost systemd[1]: Stopped ovn_controller container. Nov 28 04:08:13 localhost python3.9[109511]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:14 localhost systemd[1]: Reloading. Nov 28 04:08:14 localhost systemd-sysv-generator[109539]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:14 localhost systemd-rc-local-generator[109536]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:08:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6287 DF PROTO=TCP SPT=42114 DPT=9101 SEQ=1254892130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBD0200000000001030307) Nov 28 04:08:14 localhost systemd[1]: Stopping ovn_metadata_agent container... Nov 28 04:08:14 localhost systemd[1]: tmp-crun.UZgQuR.mount: Deactivated successfully. Nov 28 04:08:14 localhost systemd[1]: libpod-e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.scope: Deactivated successfully. Nov 28 04:08:14 localhost systemd[1]: libpod-e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.scope: Consumed 9.033s CPU time. Nov 28 04:08:14 localhost podman[109552]: 2025-11-28 09:08:14.570497178 +0000 UTC m=+0.200048220 container died e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn) Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.timer: Deactivated successfully. 
Nov 28 04:08:14 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47. Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed to open /run/systemd/transient/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: No such file or directory Nov 28 04:08:14 localhost podman[109552]: 2025-11-28 09:08:14.634502838 +0000 UTC m=+0.264053860 container cleanup e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}) Nov 28 04:08:14 localhost podman[109552]: ovn_metadata_agent Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.timer: Failed to open /run/systemd/transient/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.timer: No such file or directory Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed to open /run/systemd/transient/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: No such file or 
directory Nov 28 04:08:14 localhost podman[109565]: 2025-11-28 09:08:14.70702093 +0000 UTC m=+0.125769702 container cleanup e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64) Nov 28 04:08:14 localhost systemd[1]: libpod-conmon-e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.scope: Deactivated successfully. 
Nov 28 04:08:14 localhost podman[109595]: error opening file `/run/crun/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47/status`: No such file or directory Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.timer: Failed to open /run/systemd/transient/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.timer: No such file or directory Nov 28 04:08:14 localhost systemd[1]: e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: Failed to open /run/systemd/transient/e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47.service: No such file or directory Nov 28 04:08:14 localhost podman[109581]: 2025-11-28 09:08:14.804620364 +0000 UTC m=+0.066228739 container cleanup e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn) Nov 28 04:08:14 localhost podman[109581]: ovn_metadata_agent Nov 28 04:08:14 localhost systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully. Nov 28 04:08:14 localhost systemd[1]: Stopped ovn_metadata_agent container. 
Nov 28 04:08:15 localhost systemd[1]: var-lib-containers-storage-overlay-22314ee7dcc5723035b6772f98d17adedfb1f7b03c71f0801082e550913dd450-merged.mount: Deactivated successfully. Nov 28 04:08:15 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47-userdata-shm.mount: Deactivated successfully. Nov 28 04:08:15 localhost python3.9[109688]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:08:15 localhost systemd[1]: Reloading. Nov 28 04:08:15 localhost systemd-sysv-generator[109720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:08:15 localhost systemd-rc-local-generator[109717]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:08:15 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:08:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44402 DF PROTO=TCP SPT=41132 DPT=9100 SEQ=3174964367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBD97A0000000001030307) Nov 28 04:08:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44403 DF PROTO=TCP SPT=41132 DPT=9100 SEQ=3174964367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABBE93A0000000001030307) Nov 28 04:08:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13782 DF PROTO=TCP SPT=55652 DPT=9105 SEQ=883028529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC04530000000001030307) Nov 28 04:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13783 DF PROTO=TCP SPT=55652 DPT=9105 SEQ=883028529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC087A0000000001030307) Nov 28 04:08:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44404 DF PROTO=TCP SPT=41132 DPT=9100 SEQ=3174964367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC08FA0000000001030307) Nov 28 04:08:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6118 DF PROTO=TCP SPT=60306 DPT=9105 SEQ=3988060708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5ABC14FA0000000001030307) Nov 28 04:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 04:08:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13785 DF PROTO=TCP SPT=55652 DPT=9105 SEQ=883028529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC203A0000000001030307) Nov 28 04:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 04:08:39 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57015 DF PROTO=TCP SPT=40258 DPT=9102 SEQ=523244208 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC317A0000000001030307) Nov 28 04:08:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13786 DF PROTO=TCP SPT=55652 DPT=9105 SEQ=883028529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC40FA0000000001030307) Nov 28 04:08:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50403 DF PROTO=TCP SPT=52180 DPT=9882 SEQ=1087689752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC44FA0000000001030307) Nov 28 04:08:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33203 DF PROTO=TCP SPT=46266 DPT=9100 SEQ=3793430538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC4E7A0000000001030307) Nov 28 04:08:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33204 DF PROTO=TCP SPT=46266 DPT=9100 SEQ=3793430538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC5E3A0000000001030307) Nov 28 04:08:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42554 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2266543500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC79840000000001030307) Nov 28 04:08:58 localhost kernel: DROPPING: 
IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42555 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2266543500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC7D7A0000000001030307) Nov 28 04:08:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9945 DF PROTO=TCP SPT=55470 DPT=9882 SEQ=1427205240 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC7EAC0000000001030307) Nov 28 04:09:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35550 DF PROTO=TCP SPT=46700 DPT=9105 SEQ=4195846488 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC88FA0000000001030307) Nov 28 04:09:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42557 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2266543500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABC953A0000000001030307) Nov 28 04:09:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2686 DF PROTO=TCP SPT=45890 DPT=9102 SEQ=3024629957 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCA6BA0000000001030307) Nov 28 04:09:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42558 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2266543500 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCB4FA0000000001030307) Nov 28 04:09:14 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48602 DF PROTO=TCP SPT=43578 DPT=9101 SEQ=2593526270 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCBA800000000001030307) Nov 28 04:09:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48963 DF PROTO=TCP SPT=52360 DPT=9100 SEQ=2930900476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCC3BA0000000001030307) Nov 28 04:09:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48964 DF PROTO=TCP SPT=52360 DPT=9100 SEQ=2930900476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCD37A0000000001030307) Nov 28 04:09:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38153 DF PROTO=TCP SPT=37750 DPT=9105 SEQ=465103086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCEEB30000000001030307) Nov 28 04:09:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38154 DF PROTO=TCP SPT=37750 DPT=9105 SEQ=465103086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCF2BA0000000001030307) Nov 28 04:09:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48965 DF PROTO=TCP SPT=52360 DPT=9100 SEQ=2930900476 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCF2FA0000000001030307) Nov 28 
04:09:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13788 DF PROTO=TCP SPT=55652 DPT=9105 SEQ=883028529 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABCFEFA0000000001030307) Nov 28 04:09:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38156 DF PROTO=TCP SPT=37750 DPT=9105 SEQ=465103086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD0A7A0000000001030307) Nov 28 04:09:36 localhost systemd[1]: session-36.scope: Deactivated successfully. Nov 28 04:09:36 localhost systemd[1]: session-36.scope: Consumed 19.462s CPU time. Nov 28 04:09:36 localhost systemd-logind[763]: Session 36 logged out. Waiting for processes to exit. Nov 28 04:09:36 localhost systemd-logind[763]: Removed session 36. Nov 28 04:09:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52547 DF PROTO=TCP SPT=55104 DPT=9102 SEQ=2348563081 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD1BFA0000000001030307) Nov 28 04:09:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38157 DF PROTO=TCP SPT=37750 DPT=9105 SEQ=465103086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD2AFA0000000001030307) Nov 28 04:09:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61712 DF PROTO=TCP SPT=35592 DPT=9101 SEQ=999991009 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5ABD2FB00000000001030307) Nov 28 04:09:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27524 DF PROTO=TCP SPT=35812 DPT=9100 SEQ=629490546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD38FA0000000001030307) Nov 28 04:09:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27525 DF PROTO=TCP SPT=35812 DPT=9100 SEQ=629490546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD48BA0000000001030307) Nov 28 04:09:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19934 DF PROTO=TCP SPT=53582 DPT=9105 SEQ=2412963223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD63E30000000001030307) Nov 28 04:09:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19935 DF PROTO=TCP SPT=53582 DPT=9105 SEQ=2412963223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD67FA0000000001030307) Nov 28 04:09:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27526 DF PROTO=TCP SPT=35812 DPT=9100 SEQ=629490546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD68FA0000000001030307) Nov 28 04:10:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13119 DF PROTO=TCP SPT=51294 DPT=9882 SEQ=686327318 ACK=0 WINDOW=32640 RES=0x00 
SYN URGP=0 OPT (020405500402080A5ABD74FA0000000001030307) Nov 28 04:10:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19937 DF PROTO=TCP SPT=53582 DPT=9105 SEQ=2412963223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD7FBA0000000001030307) Nov 28 04:10:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45504 DF PROTO=TCP SPT=39436 DPT=9102 SEQ=1530707447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABD913B0000000001030307) Nov 28 04:10:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19938 DF PROTO=TCP SPT=53582 DPT=9105 SEQ=2412963223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDA0FB0000000001030307) Nov 28 04:10:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38641 DF PROTO=TCP SPT=33346 DPT=9101 SEQ=4112440622 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDA4E10000000001030307) Nov 28 04:10:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4648 DF PROTO=TCP SPT=60218 DPT=9100 SEQ=1035888636 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDAE3A0000000001030307) Nov 28 04:10:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4649 DF PROTO=TCP SPT=60218 DPT=9100 SEQ=1035888636 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDBDFA0000000001030307) Nov 28 04:10:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46143 DF PROTO=TCP SPT=48230 DPT=9105 SEQ=1998963562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDD9130000000001030307) Nov 28 04:10:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46144 DF PROTO=TCP SPT=48230 DPT=9105 SEQ=1998963562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDDD3A0000000001030307) Nov 28 04:10:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64749 DF PROTO=TCP SPT=49020 DPT=9882 SEQ=1698743421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDDE3C0000000001030307) Nov 28 04:10:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38159 DF PROTO=TCP SPT=37750 DPT=9105 SEQ=465103086 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDE8FA0000000001030307) Nov 28 04:10:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46146 DF PROTO=TCP SPT=48230 DPT=9105 SEQ=1998963562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABDF4FB0000000001030307) Nov 28 04:10:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61708 DF PROTO=TCP SPT=47744 DPT=9102 
SEQ=4236463055 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE063A0000000001030307) Nov 28 04:10:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46147 DF PROTO=TCP SPT=48230 DPT=9105 SEQ=1998963562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE14FA0000000001030307) Nov 28 04:10:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39815 DF PROTO=TCP SPT=34776 DPT=9101 SEQ=3358262386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE1A100000000001030307) Nov 28 04:10:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2441 DF PROTO=TCP SPT=37152 DPT=9100 SEQ=1364534858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE233A0000000001030307) Nov 28 04:10:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2442 DF PROTO=TCP SPT=37152 DPT=9100 SEQ=1364534858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE32FA0000000001030307) Nov 28 04:10:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=617 DF PROTO=TCP SPT=45096 DPT=9105 SEQ=2771617679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE4E420000000001030307) Nov 28 04:10:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=618 DF PROTO=TCP 
SPT=45096 DPT=9105 SEQ=2771617679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE523B0000000001030307) Nov 28 04:10:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2443 DF PROTO=TCP SPT=37152 DPT=9100 SEQ=1364534858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE52FA0000000001030307) Nov 28 04:11:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19940 DF PROTO=TCP SPT=53582 DPT=9105 SEQ=2412963223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE5EFA0000000001030307) Nov 28 04:11:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=620 DF PROTO=TCP SPT=45096 DPT=9105 SEQ=2771617679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE69FA0000000001030307) Nov 28 04:11:06 localhost sshd[110024]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:11:06 localhost systemd-logind[763]: New session 37 of user zuul. Nov 28 04:11:06 localhost systemd[1]: Started Session 37 of User zuul. 
Nov 28 04:11:07 localhost python3.9[110105]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:07 localhost python3.9[110197]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:08 localhost python3.9[110289]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2944 DF PROTO=TCP SPT=46602 DPT=9102 SEQ=287814670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE7B7A0000000001030307) Nov 28 04:11:09 localhost python3.9[110381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:09 localhost python3.9[110473]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:10 localhost python3.9[110565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:11 localhost python3.9[110657]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:11 localhost python3.9[110749]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:12 localhost python3.9[110841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:12 localhost python3.9[110933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=621 DF PROTO=TCP SPT=45096 DPT=9105 SEQ=2771617679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE8AFB0000000001030307) Nov 28 04:11:13 localhost python3.9[111025]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:13 localhost 
python3.9[111117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55985 DF PROTO=TCP SPT=51270 DPT=9882 SEQ=2418090807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE8EFB0000000001030307) Nov 28 04:11:14 localhost python3.9[111209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:15 localhost python3.9[111301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:15 localhost python3.9[111393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:16 localhost python3.9[111485]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15889 DF PROTO=TCP SPT=56990 DPT=9100 SEQ=2037800138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABE987B0000000001030307) Nov 28 04:11:16 localhost python3.9[111577]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:17 localhost python3.9[111669]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Nov 28 04:11:18 localhost python3.9[111761]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:18 localhost python3.9[111853]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:19 localhost python3.9[111945]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15890 DF PROTO=TCP SPT=56990 DPT=9100 SEQ=2037800138 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEA83A0000000001030307) Nov 28 04:11:20 localhost python3.9[112037]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:21 localhost python3.9[112129]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:21 localhost python3.9[112221]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:22 localhost python3.9[112313]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:22 localhost python3.9[112405]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:23 localhost python3.9[112497]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:24 localhost python3.9[112589]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:25 localhost python3.9[112681]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:25 localhost python3.9[112773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None 
serole=None selevel=None setype=None attributes=None Nov 28 04:11:26 localhost python3.9[112865]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:26 localhost python3.9[112957]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:27 localhost python3.9[113049]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30540 DF PROTO=TCP SPT=40126 DPT=9105 SEQ=4094490606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEC3730000000001030307) Nov 28 04:11:28 localhost python3.9[113141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False 
force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30541 DF PROTO=TCP SPT=40126 DPT=9105 SEQ=4094490606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEC77B0000000001030307) Nov 28 04:11:28 localhost python3.9[113233]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13890 DF PROTO=TCP SPT=37348 DPT=9882 SEQ=2422586960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEC89E0000000001030307) Nov 28 04:11:29 localhost python3.9[113325]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:29 localhost python3.9[113417]: ansible-ansible.builtin.file 
Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:30 localhost python3.9[113509]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:31 localhost python3.9[113601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46149 DF PROTO=TCP SPT=48230 DPT=9105 SEQ=1998963562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABED2FA0000000001030307) Nov 28 04:11:31 localhost python3.9[113693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:32 localhost python3.9[113785]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:32 localhost python3.9[113877]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:11:34 localhost python3.9[113969]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:11:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30543 DF PROTO=TCP SPT=40126 DPT=9105 SEQ=4094490606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEDF3A0000000001030307) Nov 28 04:11:35 localhost python3.9[114061]: 
ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:11:35 localhost python3.9[114153]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:11:35 localhost systemd[1]: Reloading. Nov 28 04:11:36 localhost systemd-rc-local-generator[114180]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:11:36 localhost systemd-sysv-generator[114184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:11:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:11:36 localhost python3.9[114281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:38 localhost python3.9[114374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42420 DF PROTO=TCP SPT=51322 DPT=9102 SEQ=1200137031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEF0BA0000000001030307)
Nov 28 04:11:39 localhost python3.9[114467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:39 localhost python3.9[114560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:40 localhost python3.9[114653]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:41 localhost python3.9[114746]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:41 localhost python3.9[114839]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:42 localhost python3.9[114932]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:42 localhost python3.9[115025]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30544 DF PROTO=TCP SPT=40126 DPT=9105 SEQ=4094490606 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABEFEFA0000000001030307)
Nov 28 04:11:43 localhost python3.9[115118]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20902 DF PROTO=TCP SPT=43364 DPT=9101 SEQ=2399263955 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF04700000000001030307)
Nov 28 04:11:44 localhost python3.9[115211]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:45 localhost python3.9[115304]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:46 localhost python3.9[115397]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31179 DF PROTO=TCP SPT=47726 DPT=9100 SEQ=3059007427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF0DBA0000000001030307)
Nov 28 04:11:47 localhost python3.9[115490]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:47 localhost python3.9[115583]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:48 localhost python3.9[115676]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:49 localhost python3.9[115769]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:49 localhost python3.9[115862]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:50 localhost python3.9[115955]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31180 DF PROTO=TCP SPT=47726 DPT=9100 SEQ=3059007427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF1D7A0000000001030307)
Nov 28 04:11:50 localhost python3.9[116048]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:52 localhost python3.9[116141]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:11:53 localhost systemd-logind[763]: Session 37 logged out. Waiting for processes to exit.
Nov 28 04:11:53 localhost systemd[1]: session-37.scope: Deactivated successfully.
Nov 28 04:11:53 localhost systemd[1]: session-37.scope: Consumed 30.647s CPU time.
Nov 28 04:11:53 localhost systemd-logind[763]: Removed session 37.
Nov 28 04:11:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37706 DF PROTO=TCP SPT=51020 DPT=9105 SEQ=839970434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF38A40000000001030307)
Nov 28 04:11:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37707 DF PROTO=TCP SPT=51020 DPT=9105 SEQ=839970434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF3CBA0000000001030307)
Nov 28 04:11:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31181 DF PROTO=TCP SPT=47726 DPT=9100 SEQ=3059007427 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF3CFA0000000001030307)
Nov 28 04:12:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=623 DF PROTO=TCP SPT=45096 DPT=9105 SEQ=2771617679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF48FA0000000001030307)
Nov 28 04:12:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37709 DF PROTO=TCP SPT=51020 DPT=9105 SEQ=839970434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF547A0000000001030307)
Nov 28 04:12:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42414 DF PROTO=TCP SPT=37072 DPT=9102 SEQ=924624839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF65FA0000000001030307)
Nov 28 04:12:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37710 DF PROTO=TCP SPT=51020 DPT=9105 SEQ=839970434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF74FA0000000001030307)
Nov 28 04:12:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22580 DF PROTO=TCP SPT=39386 DPT=9882 SEQ=1173008762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF78FB0000000001030307)
Nov 28 04:12:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27784 DF PROTO=TCP SPT=53182 DPT=9100 SEQ=3797941213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF82FB0000000001030307)
Nov 28 04:12:20 localhost sshd[116235]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:12:20 localhost systemd-logind[763]: New session 38 of user zuul.
Nov 28 04:12:20 localhost systemd[1]: Started Session 38 of User zuul.
Nov 28 04:12:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27785 DF PROTO=TCP SPT=53182 DPT=9100 SEQ=3797941213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABF92BB0000000001030307)
Nov 28 04:12:21 localhost python3.9[116328]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 28 04:12:22 localhost python3.9[116432]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:12:23 localhost python3.9[116524]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:12:24 localhost python3.9[116617]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:12:25 localhost python3.9[116709]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:12:25 localhost python3.9[116801]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:12:26 localhost python3.9[116874]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321145.4753542-179-4419938152560/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:12:27 localhost python3.9[116966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:12:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64497 DF PROTO=TCP SPT=60080 DPT=9105 SEQ=380828836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFADD30000000001030307)
Nov 28 04:12:28 localhost python3.9[117062]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:12:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64498 DF PROTO=TCP SPT=60080 DPT=9105 SEQ=380828836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFB1FB0000000001030307)
Nov 28 04:12:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27786 DF PROTO=TCP SPT=53182 DPT=9100 SEQ=3797941213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFB2FA0000000001030307)
Nov 28 04:12:29 localhost python3.9[117154]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:12:29 localhost python3.9[117244]: ansible-ansible.builtin.service_facts Invoked
Nov 28 04:12:30 localhost network[117261]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 04:12:30 localhost network[117262]: 'network-scripts' will be removed from distribution in near future.
Nov 28 04:12:30 localhost network[117263]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 04:12:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:12:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52068 DF PROTO=TCP SPT=55116 DPT=9882 SEQ=3541698670 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFBEFA0000000001030307)
Nov 28 04:12:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64500 DF PROTO=TCP SPT=60080 DPT=9105 SEQ=380828836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFC9BA0000000001030307)
Nov 28 04:12:35 localhost python3.9[117461]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:12:36 localhost python3.9[117551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:12:37 localhost python3.9[117647]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream#012set -euxo pipefail#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main#012# This is required for FIPS enabled until trunk.rdoproject.org#012# is not being served from a centos7 host, tracked by#012# https://issues.redhat.com/browse/RHOSZUUL-1517#012dnf -y install crypto-policies#012update-crypto-policies --set FIPS:NO-ENFORCE-EMS#012./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream#012#012# Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible#012# with rhel 9.2 openssh#012dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save#012# FIXME: perform dnf upgrade for other packages in EDPM ansible#012# here we only ensuring that decontainerized libvirt can start#012dnf -y upgrade openstack-selinux#012rm -f /run/virtlogd.pid#012#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:12:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8488 DF PROTO=TCP SPT=33026 DPT=9102 SEQ=1824906303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFDAFB0000000001030307)
Nov 28 04:12:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64501 DF PROTO=TCP SPT=60080 DPT=9105 SEQ=380828836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFEAFA0000000001030307)
Nov 28 04:12:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19794 DF PROTO=TCP SPT=54068 DPT=9101 SEQ=250308116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFEED00000000001030307)
Nov 28 04:12:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17951 DF PROTO=TCP SPT=37498 DPT=9100 SEQ=4152429093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ABFF7FB0000000001030307)
Nov 28 04:12:46 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 28 04:12:46 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 28 04:12:46 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 28 04:12:46 localhost systemd[1]: sshd.service: Consumed 1.009s CPU time.
Nov 28 04:12:46 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 28 04:12:46 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 28 04:12:46 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:46 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:46 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:46 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 28 04:12:46 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 28 04:12:46 localhost sshd[117690]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:12:46 localhost systemd[1]: Started OpenSSH server daemon.
Nov 28 04:12:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 04:12:46 localhost systemd[1]: Starting man-db-cache-update.service...
Nov 28 04:12:46 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 28 04:12:47 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 28 04:12:47 localhost systemd[1]: Finished man-db-cache-update.service.
Nov 28 04:12:47 localhost systemd[1]: run-r371e36430ddb4353bd579e3a3681593a.service: Deactivated successfully.
Nov 28 04:12:47 localhost systemd[1]: run-rbbae77d1706e4800851f165e959e8902.service: Deactivated successfully.
Nov 28 04:12:47 localhost systemd[1]: Stopping OpenSSH server daemon...
Nov 28 04:12:47 localhost systemd[1]: sshd.service: Deactivated successfully.
Nov 28 04:12:47 localhost systemd[1]: Stopped OpenSSH server daemon.
Nov 28 04:12:47 localhost systemd[1]: Stopped target sshd-keygen.target.
Nov 28 04:12:47 localhost systemd[1]: Stopping sshd-keygen.target...
Nov 28 04:12:47 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:47 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:47 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 28 04:12:47 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 28 04:12:47 localhost systemd[1]: Starting OpenSSH server daemon...
Nov 28 04:12:47 localhost sshd[117864]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:12:47 localhost systemd[1]: Started OpenSSH server daemon.
Nov 28 04:12:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17952 DF PROTO=TCP SPT=37498 DPT=9100 SEQ=4152429093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC007BA0000000001030307)
Nov 28 04:12:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27943 DF PROTO=TCP SPT=60562 DPT=9105 SEQ=1404972871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC023030000000001030307)
Nov 28 04:12:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27944 DF PROTO=TCP SPT=60562 DPT=9105 SEQ=1404972871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC026FB0000000001030307)
Nov 28 04:12:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33323 DF PROTO=TCP SPT=48168 DPT=9882 SEQ=1107686790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0282C0000000001030307)
Nov 28 04:13:00 localhost systemd[1]: tmp-crun.y4oLqG.mount: Deactivated successfully.
Nov 28 04:13:00 localhost podman[118062]: 2025-11-28 09:13:00.4118648 +0000 UTC m=+0.102088303 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55)
Nov 28 04:13:00 localhost podman[118062]: 2025-11-28 09:13:00.511710344 +0000 UTC m=+0.201933797 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_CLEAN=True, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55)
Nov 28 04:13:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37712 DF PROTO=TCP SPT=51020 DPT=9105 SEQ=839970434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC032FA0000000001030307)
Nov 28 04:13:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27946 DF PROTO=TCP SPT=60562 DPT=9105 SEQ=1404972871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC03EBA0000000001030307)
Nov 28 04:13:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22964 DF PROTO=TCP SPT=53526 DPT=9102 SEQ=1115222816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0503A0000000001030307)
Nov 28 04:13:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27947 DF PROTO=TCP SPT=60562 DPT=9105 SEQ=1404972871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC05EFA0000000001030307)
Nov 28 04:13:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30793 DF PROTO=TCP SPT=37358 DPT=9101 SEQ=481832130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC064030000000001030307)
Nov 28 04:13:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20348 DF PROTO=TCP SPT=57078 DPT=9100 SEQ=3399415307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC06D3A0000000001030307)
Nov 28 04:13:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20349 DF PROTO=TCP SPT=57078 DPT=9100 SEQ=3399415307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC07CFA0000000001030307)
Nov 28 04:13:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19517 DF PROTO=TCP SPT=52600 DPT=9105 SEQ=3779108153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC098330000000001030307)
Nov 28 04:13:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19518 DF PROTO=TCP SPT=52600 DPT=9105 SEQ=3779108153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC09C3A0000000001030307)
Nov 28 04:13:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20350 DF PROTO=TCP SPT=57078 DPT=9100 SEQ=3399415307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC09CFA0000000001030307)
Nov 28 04:13:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64503 DF PROTO=TCP SPT=60080 DPT=9105 SEQ=380828836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0A8FA0000000001030307)
Nov 28 04:13:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19520 DF PROTO=TCP SPT=52600 DPT=9105 SEQ=3779108153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0B3FA0000000001030307)
Nov 28 04:13:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9449 DF PROTO=TCP SPT=43326 DPT=9102 SEQ=149057482 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0C57A0000000001030307)
Nov 28 04:13:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19521 DF PROTO=TCP SPT=52600 DPT=9105 SEQ=3779108153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0D4FA0000000001030307)
Nov 28 04:13:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36162 DF PROTO=TCP SPT=37542 DPT=9882 SEQ=1271581261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0D8FA0000000001030307)
Nov 28 04:13:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37447 DF PROTO=TCP SPT=46520 DPT=9100 SEQ=498432609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0E27A0000000001030307)
Nov 28 04:13:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37448 DF PROTO=TCP SPT=46520 DPT=9100 SEQ=498432609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC0F23A0000000001030307)
Nov 28 04:13:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60617 DF PROTO=TCP SPT=39838 DPT=9105 SEQ=3790350889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC10D660000000001030307)
Nov 28 04:13:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60618 DF PROTO=TCP SPT=39838 DPT=9105 SEQ=3790350889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1117A0000000001030307)
Nov 28 04:13:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22409 DF PROTO=TCP SPT=38130 DPT=9882 SEQ=1707402887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1128C0000000001030307)
Nov 28 04:13:59 localhost kernel: SELinux: Converting 2741 SID table entries...
Nov 28 04:13:59 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:13:59 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:13:59 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:13:59 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:13:59 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:13:59 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:13:59 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:14:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27949 DF PROTO=TCP SPT=60562 DPT=9105 SEQ=1404972871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC11CFA0000000001030307) Nov 28 04:14:02 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=17 res=1 Nov 28 04:14:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60620 DF PROTO=TCP SPT=39838 DPT=9105 SEQ=3790350889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1293A0000000001030307) Nov 28 04:14:06 localhost python3.9[118760]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:14:07 localhost python3.9[118852]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Nov 28 04:14:07 localhost python3.9[118925]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321246.6766562-428-99739598710650/.source.fact _original_basename=.wzwoes5n follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:14:08 localhost python3.9[119015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:14:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7621 DF PROTO=TCP SPT=52616 DPT=9102 SEQ=131165713 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC13ABA0000000001030307) Nov 28 04:14:10 localhost python3.9[119113]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:14:10 localhost python3.9[119167]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False 
validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:14:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60621 DF PROTO=TCP SPT=39838 DPT=9105 SEQ=3790350889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC148FA0000000001030307) Nov 28 04:14:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39940 DF PROTO=TCP SPT=36954 DPT=9101 SEQ=54121519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC14E620000000001030307) Nov 28 04:14:14 localhost systemd[1]: Reloading. Nov 28 04:14:14 localhost systemd-rc-local-generator[119201]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:14:14 localhost systemd-sysv-generator[119206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:14:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:14:14 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 04:14:16 localhost python3.9[119307]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:14:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22687 DF PROTO=TCP SPT=34372 DPT=9100 SEQ=311103029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC157BB0000000001030307) Nov 28 04:14:18 localhost python3.9[119546]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False Nov 28 04:14:19 localhost python3.9[119638]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None Nov 28 04:14:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22688 DF PROTO=TCP SPT=34372 DPT=9100 SEQ=311103029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1677A0000000001030307) Nov 28 04:14:20 localhost python3.9[119731]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:14:21 localhost python3.9[119823]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None Nov 28 04:14:23 localhost python3.9[119915]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:14:23 localhost python3.9[120007]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:14:24 localhost python3.9[120080]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321263.4484231-753-121942666321280/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:14:25 localhost python3.9[120172]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:14:27 localhost python3.9[120266]: ansible-ansible.builtin.getent Invoked 
with database=passwd key=qemu fail_key=True service=None split=None Nov 28 04:14:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60391 DF PROTO=TCP SPT=35486 DPT=9105 SEQ=3510398960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC182940000000001030307) Nov 28 04:14:28 localhost python3.9[120359]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None Nov 28 04:14:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60392 DF PROTO=TCP SPT=35486 DPT=9105 SEQ=3510398960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC186BA0000000001030307) Nov 28 04:14:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22689 DF PROTO=TCP SPT=34372 DPT=9100 SEQ=311103029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC186FA0000000001030307) Nov 28 04:14:29 localhost python3.9[120452]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 28 04:14:29 localhost python3.9[120550]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None Nov 28 04:14:30 localhost python3.9[120642]: ansible-ansible.legacy.dnf Invoked with 
name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:14:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19523 DF PROTO=TCP SPT=52600 DPT=9105 SEQ=3779108153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC192FA0000000001030307) Nov 28 04:14:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60394 DF PROTO=TCP SPT=35486 DPT=9105 SEQ=3510398960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC19E7B0000000001030307) Nov 28 04:14:38 localhost python3.9[120736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:14:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11677 DF PROTO=TCP SPT=54912 DPT=9102 SEQ=1513804586 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1AFBA0000000001030307) Nov 28 04:14:40 
localhost python3.9[120828]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:14:41 localhost python3.9[120901]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321278.9719198-1025-179235887840623/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:14:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60395 DF PROTO=TCP SPT=35486 DPT=9105 SEQ=3510398960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1BEFA0000000001030307) Nov 28 04:14:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8681 DF PROTO=TCP SPT=58080 DPT=9882 SEQ=3609277679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1C2FA0000000001030307) Nov 28 04:14:44 localhost python3.9[120993]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:14:44 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 28 04:14:44 localhost systemd[1]: Stopped Load Kernel Modules. Nov 28 04:14:44 localhost systemd[1]: Stopping Load Kernel Modules... 
Nov 28 04:14:44 localhost systemd[1]: Starting Load Kernel Modules... Nov 28 04:14:44 localhost systemd-modules-load[120997]: Module 'msr' is built in Nov 28 04:14:44 localhost systemd[1]: Finished Load Kernel Modules. Nov 28 04:14:46 localhost python3.9[121089]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:14:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58032 DF PROTO=TCP SPT=35450 DPT=9100 SEQ=3749418050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1CCBA0000000001030307) Nov 28 04:14:47 localhost python3.9[121162]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321285.9744039-1094-181237401562284/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:14:48 localhost python3.9[121254]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 
04:14:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58033 DF PROTO=TCP SPT=35450 DPT=9100 SEQ=3749418050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1DC7A0000000001030307) Nov 28 04:14:56 localhost python3.9[121346]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:14:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3262 DF PROTO=TCP SPT=43256 DPT=9105 SEQ=2018064289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1F7C30000000001030307) Nov 28 04:14:57 localhost python3.9[121438]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile Nov 28 04:14:58 localhost python3.9[121528]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:14:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3263 DF PROTO=TCP SPT=43256 DPT=9105 SEQ=2018064289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1FBBB0000000001030307) Nov 28 04:14:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4138 DF PROTO=TCP SPT=33730 DPT=9882 SEQ=2757824583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC1FCEC0000000001030307) Nov 28 04:14:59 localhost python3.9[121620]: 
ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:14:59 localhost systemd[1]: Stopping Dynamic System Tuning Daemon... Nov 28 04:14:59 localhost systemd[1]: tuned.service: Deactivated successfully. Nov 28 04:14:59 localhost systemd[1]: Stopped Dynamic System Tuning Daemon. Nov 28 04:14:59 localhost systemd[1]: tuned.service: Consumed 1.828s CPU time, no IO. Nov 28 04:14:59 localhost systemd[1]: Starting Dynamic System Tuning Daemon... Nov 28 04:15:00 localhost systemd[1]: Started Dynamic System Tuning Daemon. Nov 28 04:15:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4140 DF PROTO=TCP SPT=33730 DPT=9882 SEQ=2757824583 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC208FA0000000001030307) Nov 28 04:15:02 localhost python3.9[121723]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline Nov 28 04:15:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3265 DF PROTO=TCP SPT=43256 DPT=9105 SEQ=2018064289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2137A0000000001030307) Nov 28 04:15:06 localhost python3.9[121891]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:15:06 localhost systemd[1]: Reloading. Nov 28 04:15:06 localhost systemd-sysv-generator[121922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:15:06 localhost systemd-rc-local-generator[121917]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:15:06 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:15:07 localhost python3.9[122020]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:15:08 localhost systemd[1]: Reloading. Nov 28 04:15:08 localhost systemd-rc-local-generator[122046]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:15:08 localhost systemd-sysv-generator[122049]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:15:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:15:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57001 DF PROTO=TCP SPT=39756 DPT=9102 SEQ=531856499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC224FA0000000001030307) Nov 28 04:15:09 localhost python3.9[122150]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:10 localhost python3.9[122243]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:10 localhost kernel: Adding 1048572k swap on /swap. Priority:-2 extents:1 across:1048572k FS Nov 28 04:15:11 localhost python3.9[122336]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3266 DF PROTO=TCP SPT=43256 DPT=9105 SEQ=2018064289 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC232FB0000000001030307) Nov 28 04:15:12 localhost python3.9[122435]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:13 localhost python3.9[122528]: 
ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:15:13 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 28 04:15:13 localhost systemd[1]: Stopped Apply Kernel Variables. Nov 28 04:15:13 localhost systemd[1]: Stopping Apply Kernel Variables... Nov 28 04:15:13 localhost systemd[1]: Starting Apply Kernel Variables... Nov 28 04:15:13 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 28 04:15:13 localhost systemd[1]: Finished Apply Kernel Variables. Nov 28 04:15:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53550 DF PROTO=TCP SPT=37978 DPT=9101 SEQ=3619797123 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC238C10000000001030307) Nov 28 04:15:14 localhost systemd-logind[763]: Session 38 logged out. Waiting for processes to exit. Nov 28 04:15:14 localhost systemd[1]: session-38.scope: Deactivated successfully. Nov 28 04:15:14 localhost systemd[1]: session-38.scope: Consumed 1min 55.779s CPU time. Nov 28 04:15:14 localhost systemd-logind[763]: Removed session 38. Nov 28 04:15:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21611 DF PROTO=TCP SPT=49132 DPT=9100 SEQ=1534770021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC241FA0000000001030307) Nov 28 04:15:19 localhost sshd[122548]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:15:19 localhost systemd-logind[763]: New session 39 of user zuul. Nov 28 04:15:19 localhost systemd[1]: Started Session 39 of User zuul. 
Nov 28 04:15:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21612 DF PROTO=TCP SPT=49132 DPT=9100 SEQ=1534770021 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC251BB0000000001030307) Nov 28 04:15:20 localhost python3.9[122641]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:15:22 localhost python3.9[122735]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:15:23 localhost python3.9[122831]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:24 localhost python3.9[122922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:15:25 localhost python3.9[123018]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:15:26 localhost python3.9[123072]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None 
nobest=None releasever=None Nov 28 04:15:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14641 DF PROTO=TCP SPT=57744 DPT=9105 SEQ=919896499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC26CF30000000001030307) Nov 28 04:15:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14642 DF PROTO=TCP SPT=57744 DPT=9105 SEQ=919896499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC270FA0000000001030307) Nov 28 04:15:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53300 DF PROTO=TCP SPT=44682 DPT=9882 SEQ=1226433098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2721C0000000001030307) Nov 28 04:15:30 localhost python3.9[123166]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:15:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60397 DF PROTO=TCP SPT=35486 DPT=9105 SEQ=3510398960 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC27CFA0000000001030307) Nov 28 04:15:32 localhost python3.9[123313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 28 04:15:32 localhost python3.9[123405]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:15:33 localhost python3.9[123508]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:15:34 localhost python3.9[123556]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:15:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14644 DF PROTO=TCP SPT=57744 DPT=9105 SEQ=919896499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC288BA0000000001030307) Nov 28 04:15:34 localhost python3.9[123648]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:15:35 localhost python3.9[123721]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764321334.265909-325-172165940699182/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:15:36 localhost python3.9[123813]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 28 04:15:36 localhost python3.9[123905]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 28 04:15:37 localhost python3.9[123997]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 28 04:15:38 localhost python3.9[124089]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 
option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None Nov 28 04:15:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59198 DF PROTO=TCP SPT=48334 DPT=9102 SEQ=865906324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC29A3A0000000001030307) Nov 28 04:15:39 localhost python3.9[124179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:15:39 localhost python3.9[124273]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:15:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14645 DF PROTO=TCP SPT=57744 DPT=9105 SEQ=919896499 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2A8FA0000000001030307) Nov 28 04:15:43 localhost python3.9[124367]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:15:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61599 DF PROTO=TCP SPT=40624 DPT=9101 SEQ=247155788 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2ADF00000000001030307) Nov 28 04:15:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34072 DF PROTO=TCP SPT=53536 DPT=9100 SEQ=3645617858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2B73A0000000001030307) Nov 28 04:15:48 localhost python3.9[124461]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:15:50 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34073 DF PROTO=TCP SPT=53536 DPT=9100 SEQ=3645617858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2C6FB0000000001030307) Nov 28 04:15:52 localhost python3.9[124561]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:15:56 localhost python3.9[124655]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:15:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22516 DF PROTO=TCP SPT=56974 DPT=9105 SEQ=3597303019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2E2230000000001030307) Nov 28 04:15:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22517 DF PROTO=TCP SPT=56974 DPT=9105 SEQ=3597303019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2E63A0000000001030307) Nov 28 04:15:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34074 DF PROTO=TCP SPT=53536 DPT=9100 SEQ=3645617858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2E6FA0000000001030307) Nov 28 04:16:00 localhost python3.9[124749]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:16:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46985 DF PROTO=TCP SPT=36826 DPT=9882 SEQ=3041031134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2F33A0000000001030307) Nov 28 04:16:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22519 DF PROTO=TCP SPT=56974 DPT=9105 SEQ=3597303019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC2FDFA0000000001030307) Nov 28 04:16:05 localhost python3.9[124843]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon 
', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:16:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43010 DF PROTO=TCP SPT=44408 DPT=9102 SEQ=2239395787 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC30F7A0000000001030307) Nov 28 04:16:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22520 DF PROTO=TCP SPT=56974 DPT=9105 SEQ=3597303019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC31EFA0000000001030307) Nov 28 04:16:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46987 DF PROTO=TCP SPT=36826 DPT=9882 SEQ=3041031134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC322FA0000000001030307) Nov 28 04:16:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52592 DF PROTO=TCP SPT=40540 DPT=9100 SEQ=1621524664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC32C7A0000000001030307) Nov 28 04:16:18 localhost python3.9[125090]: 
ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:16:19 localhost python3.9[125195]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:16:19 localhost python3.9[125268]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764321378.4169908-724-107076641266109/.source.json _original_basename=.q3h4m5gm follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:16:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52593 DF PROTO=TCP SPT=40540 DPT=9100 SEQ=1621524664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC33C3A0000000001030307) Nov 28 04:16:20 localhost python3.9[125360]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 
'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:27 localhost podman[125373]: 2025-11-28 09:16:20.831944511 +0000 UTC m=+0.032047797 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 28 04:16:27 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation. Nov 28 04:16:27 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:16:27 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:16:27 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:16:27 localhost rsyslogd[758]: imjournal: journal files changed, reloading... 
[v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:16:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48260 DF PROTO=TCP SPT=46364 DPT=9105 SEQ=3844461558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC357550000000001030307) Nov 28 04:16:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48261 DF PROTO=TCP SPT=46364 DPT=9105 SEQ=3844461558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC35B7B0000000001030307) Nov 28 04:16:28 localhost python3.9[125571]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30662 DF PROTO=TCP SPT=48928 DPT=9882 SEQ=267520267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC35C7C0000000001030307) Nov 28 04:16:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c 
MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14647 DF PROTO=TCP SPT=57744 DPT=9105 SEQ=919896499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC366FA0000000001030307) Nov 28 04:16:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48263 DF PROTO=TCP SPT=46364 DPT=9105 SEQ=3844461558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3733B0000000001030307) Nov 28 04:16:36 localhost podman[125585]: 2025-11-28 09:16:28.766771503 +0000 UTC m=+0.043712025 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 28 04:16:38 localhost python3.9[125784]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29188 DF PROTO=TCP SPT=53792 DPT=9102 SEQ=512309727 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3847A0000000001030307) Nov 28 04:16:39 localhost 
podman[125797]: 2025-11-28 09:16:38.238471942 +0000 UTC m=+0.045082148 image pull quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified Nov 28 04:16:41 localhost python3.9[125963]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:42 localhost podman[125975]: 2025-11-28 09:16:41.273941039 +0000 UTC m=+0.044520821 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 04:16:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48264 DF PROTO=TCP SPT=46364 DPT=9105 SEQ=3844461558 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC392FA0000000001030307) Nov 28 04:16:43 localhost python3.9[126140]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 
'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23727 DF PROTO=TCP SPT=36496 DPT=9101 SEQ=3608160821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC398510000000001030307) Nov 28 04:16:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11782 DF PROTO=TCP SPT=34516 DPT=9100 SEQ=3977907378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3A17A0000000001030307) Nov 28 04:16:46 localhost podman[126153]: 2025-11-28 09:16:43.588973079 +0000 UTC m=+0.044376006 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Nov 28 04:16:47 localhost python3.9[126329]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None 
password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None Nov 28 04:16:49 localhost podman[126343]: 2025-11-28 09:16:47.954886251 +0000 UTC m=+0.031371296 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Nov 28 04:16:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11783 DF PROTO=TCP SPT=34516 DPT=9100 SEQ=3977907378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3B13A0000000001030307) Nov 28 04:16:51 localhost systemd-logind[763]: Session 39 logged out. Waiting for processes to exit. Nov 28 04:16:51 localhost systemd[1]: session-39.scope: Deactivated successfully. Nov 28 04:16:51 localhost systemd[1]: session-39.scope: Consumed 1min 31.619s CPU time. Nov 28 04:16:51 localhost systemd-logind[763]: Removed session 39. Nov 28 04:16:57 localhost sshd[126448]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:16:57 localhost systemd-logind[763]: New session 40 of user zuul. Nov 28 04:16:57 localhost systemd[1]: Started Session 40 of User zuul. 
Nov 28 04:16:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44103 DF PROTO=TCP SPT=46612 DPT=9105 SEQ=2192743834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3CC830000000001030307) Nov 28 04:16:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44104 DF PROTO=TCP SPT=46612 DPT=9105 SEQ=2192743834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3D07A0000000001030307) Nov 28 04:16:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11784 DF PROTO=TCP SPT=34516 DPT=9100 SEQ=3977907378 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3D0FA0000000001030307) Nov 28 04:16:58 localhost python3.9[126797]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:17:00 localhost python3.9[126893]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None Nov 28 04:17:01 localhost python3.9[126986]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:17:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22522 DF PROTO=TCP SPT=56974 DPT=9105 SEQ=3597303019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3DCFB0000000001030307) Nov 28 04:17:02 localhost python3.9[127040]: ansible-ansible.legacy.dnf Invoked with download_only=True 
name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:17:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44106 DF PROTO=TCP SPT=46612 DPT=9105 SEQ=2192743834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3E83A0000000001030307) Nov 28 04:17:07 localhost python3.9[127243]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:17:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21191 DF PROTO=TCP SPT=35300 DPT=9102 SEQ=3701130136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC3F9BA0000000001030307) Nov 28 04:17:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44107 DF PROTO=TCP 
SPT=46612 DPT=9105 SEQ=2192743834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC408FA0000000001030307) Nov 28 04:17:13 localhost python3.9[127666]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:17:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52818 DF PROTO=TCP SPT=54796 DPT=9882 SEQ=3235994454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC40CFA0000000001030307) Nov 28 04:17:15 localhost python3.9[127759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:17:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39728 DF PROTO=TCP SPT=41438 DPT=9100 SEQ=1105269456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC416BA0000000001030307) Nov 28 04:17:16 localhost python3.9[127851]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None Nov 28 04:17:18 localhost kernel: SELinux: Converting 2743 SID table entries... 
Nov 28 04:17:18 localhost kernel: SELinux: policy capability network_peer_controls=1
Nov 28 04:17:18 localhost kernel: SELinux: policy capability open_perms=1
Nov 28 04:17:18 localhost kernel: SELinux: policy capability extended_socket_class=1
Nov 28 04:17:18 localhost kernel: SELinux: policy capability always_check_network=0
Nov 28 04:17:18 localhost kernel: SELinux: policy capability cgroup_seclabel=1
Nov 28 04:17:18 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 28 04:17:18 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1
Nov 28 04:17:19 localhost python3.9[128026]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:17:20 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=18 res=1
Nov 28 04:17:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39729 DF PROTO=TCP SPT=41438 DPT=9100 SEQ=1105269456 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4267A0000000001030307)
Nov 28 04:17:20 localhost python3.9[128124]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:17:24 localhost python3.9[128218]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:17:26 localhost python3.9[128463]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 28 04:17:26 localhost python3.9[128553]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:17:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46985 DF PROTO=TCP SPT=35882 DPT=9105 SEQ=2967770997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC441B30000000001030307)
Nov 28 04:17:27 localhost python3.9[128647]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:17:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46986 DF PROTO=TCP SPT=35882 DPT=9105 SEQ=2967770997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC445BB0000000001030307)
Nov 28 04:17:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22094 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=1920140342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC446DB0000000001030307)
Nov 28 04:17:31 localhost python3.9[128741]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:17:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22096 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=1920140342 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC452FA0000000001030307)
Nov 28 04:17:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46988 DF PROTO=TCP SPT=35882 DPT=9105 SEQ=2967770997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC45D7A0000000001030307)
Nov 28 04:17:35 localhost python3.9[128835]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 28 04:17:36 localhost systemd[1]: Reloading.
Nov 28 04:17:36 localhost systemd-rc-local-generator[128866]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:17:36 localhost systemd-sysv-generator[128870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:17:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:17:38 localhost python3.9[128967]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:17:38 localhost python3.9[129059]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20253 DF PROTO=TCP SPT=57926 DPT=9102 SEQ=1309689648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC46EFA0000000001030307)
Nov 28 04:17:39 localhost python3.9[129153]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:40 localhost python3.9[129245]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:40 localhost auditd[719]: Audit daemon rotating log files
Nov 28 04:17:41 localhost python3.9[129337]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:17:41 localhost python3.9[129410]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321460.8703544-566-112929059883920/.source _original_basename=.ylwp0eto follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:42 localhost python3.9[129502]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46989 DF PROTO=TCP SPT=35882 DPT=9105 SEQ=2967770997 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC47CFA0000000001030307)
Nov 28 04:17:43 localhost python3.9[129594]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 28 04:17:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5870 DF PROTO=TCP SPT=37890 DPT=9101 SEQ=1158618655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC482B10000000001030307)
Nov 28 04:17:44 localhost python3.9[129686]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:45 localhost python3.9[129778]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:17:45 localhost python3.9[129851]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321464.757868-691-220683502873993/.source.yaml _original_basename=.c9ttw3fb follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43548 DF PROTO=TCP SPT=36578 DPT=9100 SEQ=2429665641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC48BFB0000000001030307)
Nov 28 04:17:46 localhost python3.9[129943]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 28 04:17:47 localhost ansible-async_wrapper.py[130048]: Invoked with j995671152243 300 /home/zuul/.ansible/tmp/ansible-tmp-1764321467.097456-763-247709448489443/AnsiballZ_edpm_os_net_config.py _
Nov 28 04:17:47 localhost ansible-async_wrapper.py[130051]: Starting module and watcher
Nov 28 04:17:47 localhost ansible-async_wrapper.py[130051]: Start watching 130052 (300)
Nov 28 04:17:47 localhost ansible-async_wrapper.py[130052]: Start module (130052)
Nov 28 04:17:47 localhost ansible-async_wrapper.py[130048]: Return async_wrapper task started.
Nov 28 04:17:48 localhost python3.9[130053]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Nov 28 04:17:48 localhost ansible-async_wrapper.py[130052]: Module complete (130052)
Nov 28 04:17:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43549 DF PROTO=TCP SPT=36578 DPT=9100 SEQ=2429665641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC49BBA0000000001030307)
Nov 28 04:17:51 localhost python3.9[130145]: ansible-ansible.legacy.async_status Invoked with jid=j995671152243.130048 mode=status _async_dir=/root/.ansible_async
Nov 28 04:17:52 localhost python3.9[130204]: ansible-ansible.legacy.async_status Invoked with jid=j995671152243.130048 mode=cleanup _async_dir=/root/.ansible_async
Nov 28 04:17:52 localhost python3.9[130296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:17:52 localhost ansible-async_wrapper.py[130051]: Done in kid B.
Nov 28 04:17:53 localhost python3.9[130369]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321472.3957-829-179115735629208/.source.returncode _original_basename=.qtpdiqhn follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:54 localhost python3.9[130461]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:17:54 localhost python3.9[130534]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321473.598801-877-179571465957105/.source.cfg _original_basename=.a89rgtaa follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:17:55 localhost python3.9[130626]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 04:17:55 localhost systemd[1]: Reloading Network Manager...
Nov 28 04:17:55 localhost NetworkManager[5965]: [1764321475.4543] audit: op="reload" arg="0" pid=130630 uid=0 result="success"
Nov 28 04:17:55 localhost NetworkManager[5965]: [1764321475.4554] config: signal: SIGHUP (no changes from disk)
Nov 28 04:17:55 localhost systemd[1]: Reloaded Network Manager.
Nov 28 04:17:56 localhost systemd[1]: session-40.scope: Deactivated successfully.
Nov 28 04:17:56 localhost systemd[1]: session-40.scope: Consumed 35.992s CPU time.
Nov 28 04:17:56 localhost systemd-logind[763]: Session 40 logged out. Waiting for processes to exit.
Nov 28 04:17:56 localhost systemd-logind[763]: Removed session 40.
Nov 28 04:17:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2392 DF PROTO=TCP SPT=50554 DPT=9105 SEQ=3377560418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4B6E30000000001030307)
Nov 28 04:17:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2393 DF PROTO=TCP SPT=50554 DPT=9105 SEQ=3377560418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4BAFB0000000001030307)
Nov 28 04:17:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54886 DF PROTO=TCP SPT=53636 DPT=9882 SEQ=3097252902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4BC0D0000000001030307)
Nov 28 04:18:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44109 DF PROTO=TCP SPT=46612 DPT=9105 SEQ=2192743834 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4C6FB0000000001030307)
Nov 28 04:18:02 localhost sshd[130645]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:18:03 localhost systemd-logind[763]: New session 41 of user zuul.
Nov 28 04:18:03 localhost systemd[1]: Started Session 41 of User zuul.
Nov 28 04:18:03 localhost python3.9[130738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:18:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2395 DF PROTO=TCP SPT=50554 DPT=9105 SEQ=3377560418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4D2BA0000000001030307)
Nov 28 04:18:05 localhost python3.9[130832]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:18:06 localhost python3.9[130977]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:18:06 localhost systemd[1]: session-41.scope: Deactivated successfully.
Nov 28 04:18:06 localhost systemd[1]: session-41.scope: Consumed 2.245s CPU time.
Nov 28 04:18:06 localhost systemd-logind[763]: Session 41 logged out. Waiting for processes to exit.
Nov 28 04:18:06 localhost systemd-logind[763]: Removed session 41.
Nov 28 04:18:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41137 DF PROTO=TCP SPT=56410 DPT=9102 SEQ=331650185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4E43A0000000001030307)
Nov 28 04:18:11 localhost sshd[130993]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:18:12 localhost systemd-logind[763]: New session 42 of user zuul.
Nov 28 04:18:12 localhost systemd[1]: Started Session 42 of User zuul.
Nov 28 04:18:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2396 DF PROTO=TCP SPT=50554 DPT=9105 SEQ=3377560418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4F2FA0000000001030307)
Nov 28 04:18:13 localhost python3.9[131116]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:18:13 localhost python3.9[131254]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:18:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22422 DF PROTO=TCP SPT=44324 DPT=9101 SEQ=264599625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC4F7E10000000001030307)
Nov 28 04:18:15 localhost python3.9[131386]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:18:15 localhost python3.9[131440]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:18:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27699 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=2855922814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC500FA0000000001030307)
Nov 28 04:18:19 localhost python3.9[131549]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:18:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27700 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=2855922814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC510BB0000000001030307)
Nov 28 04:18:21 localhost python3.9[131696]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:18:22 localhost python3.9[131788]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:18:22 localhost python3.9[131891]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:18:23 localhost python3.9[131939]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:18:24 localhost python3.9[132031]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:18:24 localhost python3.9[132079]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:18:25 localhost python3.9[132171]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:18:26 localhost python3.9[132263]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:18:26 localhost python3.9[132355]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:18:27 localhost python3.9[132447]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:18:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52032 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=992114278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC52C130000000001030307)
Nov 28 04:18:28 localhost python3.9[132539]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:18:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52033 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=992114278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5303A0000000001030307)
Nov 28 04:18:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27701 DF PROTO=TCP SPT=42218 DPT=9100 SEQ=2855922814 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC530FA0000000001030307)
Nov 28 04:18:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6537 DF PROTO=TCP SPT=35140 DPT=9882 SEQ=3504925525 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC53D3A0000000001030307)
Nov 28 04:18:32 localhost python3.9[132633]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:18:33 localhost python3.9[132727]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:18:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 28 04:18:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 28 04:18:34 localhost python3.9[132819]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:18:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52035 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=992114278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC547FA0000000001030307)
Nov 28 04:18:35 localhost python3.9[132911]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:18:36 localhost python3.9[133004]: ansible-service_facts Invoked
Nov 28 04:18:36 localhost network[133021]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 04:18:36 localhost network[133022]: 'network-scripts' will be removed from distribution in near future.
Nov 28 04:18:36 localhost network[133023]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 04:18:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 28 04:18:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 28 04:18:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48615 DF PROTO=TCP SPT=35222 DPT=9102 SEQ=462316261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5593A0000000001030307)
Nov 28 04:18:39 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52036 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=992114278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC568FB0000000001030307)
Nov 28 04:18:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26912 DF PROTO=TCP SPT=60128 DPT=9100 SEQ=1427177641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC56A430000000001030307)
Nov 28 04:18:45 localhost python3.9[133345]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:18:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26914 DF PROTO=TCP SPT=60128 DPT=9100 SEQ=1427177641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5763A0000000001030307)
Nov 28 04:18:50 localhost python3.9[133439]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 28 04:18:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26915 DF PROTO=TCP SPT=60128 DPT=9100 SEQ=1427177641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC585FA0000000001030307)
Nov 28 04:18:51 localhost python3.9[133531]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:18:52 localhost python3.9[133606]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321531.0919702-660-118800461441946/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:18:53 localhost python3.9[133700]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:18:53 localhost python3.9[133775]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321532.596823-704-84088194372708/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:18:55 localhost python3.9[133869]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:18:56 localhost python3.9[133963]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:18:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37810 DF PROTO=TCP SPT=40280 DPT=9105 SEQ=1448376460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5A1430000000001030307)
Nov 28 04:18:58 localhost python3.9[134017]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:18:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37811 DF PROTO=TCP SPT=40280 DPT=9105 SEQ=1448376460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5A53A0000000001030307)
Nov 28 04:18:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7316 DF PROTO=TCP SPT=48208 DPT=9882 SEQ=3142136880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5A66C0000000001030307)
Nov 28 04:19:00 localhost python3.9[134111]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:19:01 localhost python3.9[134165]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 04:19:01 localhost chronyd[26579]: chronyd exiting
Nov 28 04:19:01 localhost systemd[1]: Stopping NTP
client/server... Nov 28 04:19:01 localhost systemd[1]: chronyd.service: Deactivated successfully. Nov 28 04:19:01 localhost systemd[1]: Stopped NTP client/server. Nov 28 04:19:01 localhost systemd[1]: Starting NTP client/server... Nov 28 04:19:01 localhost chronyd[134173]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG) Nov 28 04:19:01 localhost chronyd[134173]: Frequency -30.412 +/- 0.251 ppm read from /var/lib/chrony/drift Nov 28 04:19:01 localhost chronyd[134173]: Loaded seccomp filter (level 2) Nov 28 04:19:01 localhost systemd[1]: Started NTP client/server. Nov 28 04:19:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2398 DF PROTO=TCP SPT=50554 DPT=9105 SEQ=3377560418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5B0FB0000000001030307) Nov 28 04:19:01 localhost systemd[1]: session-42.scope: Deactivated successfully. Nov 28 04:19:01 localhost systemd[1]: session-42.scope: Consumed 27.754s CPU time. Nov 28 04:19:01 localhost systemd-logind[763]: Session 42 logged out. Waiting for processes to exit. Nov 28 04:19:01 localhost systemd-logind[763]: Removed session 42. Nov 28 04:19:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37813 DF PROTO=TCP SPT=40280 DPT=9105 SEQ=1448376460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5BCFA0000000001030307) Nov 28 04:19:07 localhost sshd[134189]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:19:07 localhost systemd-logind[763]: New session 43 of user zuul. Nov 28 04:19:07 localhost systemd[1]: Started Session 43 of User zuul. 
Nov 28 04:19:08 localhost python3.9[134282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:19:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23043 DF PROTO=TCP SPT=34794 DPT=9102 SEQ=1845156581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5CE7A0000000001030307)
Nov 28 04:19:10 localhost python3.9[134378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:10 localhost python3.9[134483]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:11 localhost python3.9[134531]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.hs4czwh6 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:12 localhost python3.9[134623]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37814 DF PROTO=TCP SPT=40280 DPT=9105 SEQ=1448376460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5DD1A0000000001030307)
Nov 28 04:19:13 localhost python3.9[134698]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321552.157387-145-25053830111139/.source _original_basename=.zj4whxlu follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:14 localhost python3.9[134790]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:19:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18770 DF PROTO=TCP SPT=34950 DPT=9101 SEQ=3969529837 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5E2400000000001030307)
Nov 28 04:19:14 localhost python3.9[134882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:15 localhost python3.9[134955]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321554.387282-218-72689948104764/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:19:15 localhost python3.9[135047]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:16 localhost python3.9[135120]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321555.4753401-218-87334635381486/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:19:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27201 DF PROTO=TCP SPT=48680 DPT=9100 SEQ=234430106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5EB7A0000000001030307)
Nov 28 04:19:17 localhost python3.9[135212]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:18 localhost python3.9[135348]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:18 localhost python3.9[135438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321557.6448386-328-72131090457722/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:19 localhost podman[135545]: 
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.207321098 +0000 UTC m=+0.079643872 container create 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , release=553, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main)
Nov 28 04:19:19 localhost systemd[1]: Started libpod-conmon-6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991.scope.
Nov 28 04:19:19 localhost systemd[1]: Started libcrun container.
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.173779876 +0000 UTC m=+0.046102660 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.288172106 +0000 UTC m=+0.160494840 container init 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, CEPH_POINT_RELEASE=)
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.300134563 +0000 UTC m=+0.172457307 container start 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, io.buildah.version=1.33.12, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.301157494 +0000 UTC m=+0.173480268 container attach 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, release=553, vendor=Red Hat, Inc., name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 04:19:19 localhost friendly_spence[135583]: 167 167
Nov 28 04:19:19 localhost systemd[1]: libpod-6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991.scope: Deactivated successfully.
Nov 28 04:19:19 localhost podman[135545]: 2025-11-28 09:19:19.307465188 +0000 UTC m=+0.179787972 container died 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vcs-type=git, io.buildah.version=1.33.12, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7)
Nov 28 04:19:19 localhost podman[135602]: 2025-11-28 09:19:19.400993767 +0000 UTC m=+0.086340418 container remove 6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_spence, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=553, architecture=x86_64, version=7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Nov 28 04:19:19 localhost systemd[1]: libpod-conmon-6185c59ccc6b361817e96770c2f80db049cbf66d4b4aae6b455916496b16c991.scope: Deactivated successfully.
Nov 28 04:19:19 localhost podman[135641]: 
Nov 28 04:19:19 localhost python3.9[135633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:19:19 localhost podman[135641]: 2025-11-28 09:19:19.61237214 +0000 UTC m=+0.070035226 container create 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, release=553, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Nov 28 04:19:19 localhost podman[135641]: 2025-11-28 09:19:19.574140914 +0000 UTC m=+0.031804040 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 04:19:19 localhost systemd[1]: Started libpod-conmon-732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da.scope.
Nov 28 04:19:19 localhost systemd[1]: Started libcrun container.
Nov 28 04:19:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16a45510d8a0af781d2a925b6188ddea1809f93b477f1e6800b24e01d484c50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 28 04:19:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16a45510d8a0af781d2a925b6188ddea1809f93b477f1e6800b24e01d484c50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 28 04:19:19 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16a45510d8a0af781d2a925b6188ddea1809f93b477f1e6800b24e01d484c50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 28 04:19:19 localhost podman[135641]: 2025-11-28 09:19:19.70597518 +0000 UTC m=+0.163638246 container init 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True)
Nov 28 04:19:19 localhost podman[135641]: 2025-11-28 09:19:19.716634509 +0000 UTC m=+0.174297575 container start 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, RELEASE=main)
Nov 28 04:19:19 localhost podman[135641]: 2025-11-28 09:19:19.716776283 +0000 UTC m=+0.174439339 container attach 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Nov 28 04:19:20 localhost python3.9[135736]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321559.0844204-373-108617403463863/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:19:20 localhost systemd[1]: var-lib-containers-storage-overlay-6964407b0d14846ddd213b8ca0f4ccbf7bc1e50105065c9c28f341fa01c0aa42-merged.mount: Deactivated successfully.
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: [
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: {
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "available": false,
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "ceph_device": false,
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "device_id": "QEMU_DVD-ROM_QM00001",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "lsm_data": {},
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "lvs": [],
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "path": "/dev/sr0",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "rejected_reasons": [
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "Insufficient space (<5GB)",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "Has a FileSystem"
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: ],
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "sys_api": {
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "actuators": null,
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "device_nodes": "sr0",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "human_readable_size": "482.00 KB",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "id_bus": "ata",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "model": "QEMU DVD-ROM",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "nr_requests": "2",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "partitions": {},
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "path": "/dev/sr0",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "removable": "1",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "rev": "2.5+",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "ro": "0",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "rotational": "1",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "sas_address": "",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "sas_device_handle": "",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "scheduler_mode": "mq-deadline",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "sectors": 0,
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "sectorsize": "2048",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "size": 493568.0,
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "support_discard": "0",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "type": "disk",
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: "vendor": "QEMU"
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: }
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: }
Nov 28 04:19:20 localhost eloquent_dhawan[135670]: ]
Nov 28 04:19:20 localhost systemd[1]: libpod-732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da.scope: Deactivated successfully.
Nov 28 04:19:20 localhost podman[135641]: 2025-11-28 09:19:20.564040061 +0000 UTC m=+1.021703197 container died 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, release=553, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 28 04:19:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27202 DF PROTO=TCP SPT=48680 DPT=9100 SEQ=234430106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC5FB3A0000000001030307)
Nov 28 04:19:20 localhost systemd[1]: tmp-crun.yo5Dvj.mount: Deactivated successfully.
Nov 28 04:19:20 localhost systemd[1]: var-lib-containers-storage-overlay-a16a45510d8a0af781d2a925b6188ddea1809f93b477f1e6800b24e01d484c50-merged.mount: Deactivated successfully.
Nov 28 04:19:20 localhost podman[137201]: 2025-11-28 09:19:20.646721106 +0000 UTC m=+0.076074213 container remove 732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eloquent_dhawan, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, release=553, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 04:19:20 localhost systemd[1]: libpod-conmon-732e0f4686b57e1561c9fbb72b502d15d89c1bffac5406b83f7a3de2b2a3c0da.scope: Deactivated successfully.
Nov 28 04:19:21 localhost python3.9[137260]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:19:21 localhost systemd[1]: Reloading. Nov 28 04:19:21 localhost systemd-sysv-generator[137288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:19:21 localhost systemd-rc-local-generator[137285]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:19:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:19:21 localhost systemd[1]: Reloading. Nov 28 04:19:21 localhost systemd-sysv-generator[137342]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:19:21 localhost systemd-rc-local-generator[137336]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:19:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:19:21 localhost systemd[1]: Starting EDPM Container Shutdown... Nov 28 04:19:21 localhost systemd[1]: Finished EDPM Container Shutdown. 
Nov 28 04:19:22 localhost python3.9[137444]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:23 localhost python3.9[137517]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321562.0514174-442-253686083774015/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:23 localhost python3.9[137609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:24 localhost python3.9[137682]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321563.3922908-488-70195843921004/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:25 localhost python3.9[137774]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:19:25 localhost systemd[1]: Reloading. 
Nov 28 04:19:25 localhost systemd-sysv-generator[137802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:19:25 localhost systemd-rc-local-generator[137797]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:19:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:19:25 localhost systemd[1]: Starting Create netns directory... Nov 28 04:19:25 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 04:19:25 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 04:19:25 localhost systemd[1]: Finished Create netns directory. Nov 28 04:19:26 localhost python3.9[137906]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:19:26 localhost network[137923]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:19:26 localhost network[137924]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:19:26 localhost network[137925]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:19:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38718 DF PROTO=TCP SPT=54040 DPT=9105 SEQ=3331519835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC616720000000001030307) Nov 28 04:19:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:19:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38719 DF PROTO=TCP SPT=54040 DPT=9105 SEQ=3331519835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC61A7A0000000001030307) Nov 28 04:19:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27203 DF PROTO=TCP SPT=48680 DPT=9100 SEQ=234430106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC61AFA0000000001030307) Nov 28 04:19:30 localhost python3.9[138127]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:31 localhost python3.9[138202]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321570.1283126-610-265402222506364/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52038 DF PROTO=TCP SPT=51260 DPT=9105 SEQ=992114278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC626FB0000000001030307) Nov 28 04:19:32 localhost python3.9[138295]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False 
enabled=None force=None masked=None Nov 28 04:19:32 localhost systemd[1]: Reloading OpenSSH server daemon... Nov 28 04:19:32 localhost systemd[1]: Reloaded OpenSSH server daemon. Nov 28 04:19:32 localhost sshd[117864]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:19:32 localhost python3.9[138391]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:33 localhost python3.9[138483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:34 localhost python3.9[138556]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321573.2163177-705-3039262264424/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38721 DF PROTO=TCP SPT=54040 DPT=9105 SEQ=3331519835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC6323A0000000001030307) Nov 28 04:19:35 localhost python3.9[138648]: ansible-community.general.timezone Invoked with name=UTC 
hwclock=None Nov 28 04:19:35 localhost systemd[1]: Starting Time & Date Service... Nov 28 04:19:35 localhost systemd[1]: Started Time & Date Service. Nov 28 04:19:37 localhost python3.9[138744]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:38 localhost python3.9[138836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:38 localhost python3.9[138909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321577.7613251-810-155713258236950/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61991 DF PROTO=TCP SPT=59476 DPT=9102 SEQ=1768630148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC643BA0000000001030307) Nov 28 04:19:39 localhost python3.9[139001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False 
checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:40 localhost python3.9[139074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321579.04462-854-185558833826055/.source.yaml _original_basename=.g6c8i0jw follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:40 localhost python3.9[139166]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:41 localhost python3.9[139241]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321580.2987318-898-34897107821847/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:42 localhost python3.9[139333]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:19:42 localhost python3.9[139426]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None 
executable=None creates=None removes=None stdin=None Nov 28 04:19:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38722 DF PROTO=TCP SPT=54040 DPT=9105 SEQ=3331519835 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC652FA0000000001030307) Nov 28 04:19:43 localhost python3[139519]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 28 04:19:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51114 DF PROTO=TCP SPT=35212 DPT=9882 SEQ=1137333229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC656FA0000000001030307) Nov 28 04:19:44 localhost python3.9[139611]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:45 localhost python3.9[139684]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321584.2228937-1016-153505989421412/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:46 localhost python3.9[139776]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:46 localhost python3.9[139849]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321585.5609047-1060-186373686427954/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:47 localhost python3.9[139941]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61992 DF PROTO=TCP SPT=59476 DPT=9102 SEQ=1768630148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC664FA0000000001030307) Nov 28 04:19:48 localhost python3.9[140014]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321586.9611557-1106-64615816316335/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:48 localhost python3.9[140106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:49 localhost python3.9[140179]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 
owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321588.2086613-1151-279271851122672/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:50 localhost python3.9[140271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:19:50 localhost python3.9[140344]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321589.5293438-1196-12187398475216/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:51 localhost python3.9[140436]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:52 localhost python3.9[140528]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:19:52 localhost python3.9[140623]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:53 localhost python3.9[140716]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:54 localhost python3.9[140808]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:19:55 localhost python3.9[140900]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Nov 28 04:19:55 localhost python3.9[140993]: ansible-ansible.posix.mount Invoked with 
fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None Nov 28 04:19:56 localhost systemd[1]: session-43.scope: Deactivated successfully. Nov 28 04:19:56 localhost systemd[1]: session-43.scope: Consumed 28.826s CPU time. Nov 28 04:19:56 localhost systemd-logind[763]: Session 43 logged out. Waiting for processes to exit. Nov 28 04:19:56 localhost systemd-logind[763]: Removed session 43. Nov 28 04:19:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37742 DF PROTO=TCP SPT=38210 DPT=9105 SEQ=3067797486 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC68BA60000000001030307) Nov 28 04:19:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52081 DF PROTO=TCP SPT=58050 DPT=9882 SEQ=2195339610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC690CC0000000001030307) Nov 28 04:20:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37816 DF PROTO=TCP SPT=40280 DPT=9105 SEQ=1448376460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC69AFA0000000001030307) Nov 28 04:20:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35545 DF PROTO=TCP SPT=58008 DPT=9102 SEQ=2439919783 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC69D190000000001030307) Nov 28 04:20:02 localhost sshd[141009]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:20:02 localhost systemd-logind[763]: New session 44 of user zuul. 
Nov 28 04:20:03 localhost systemd[1]: Started Session 44 of User zuul. Nov 28 04:20:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7322 DF PROTO=TCP SPT=48208 DPT=9882 SEQ=3142136880 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC6A0FB0000000001030307) Nov 28 04:20:03 localhost python3.9[141104]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None Nov 28 04:20:05 localhost python3.9[141196]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:20:05 localhost systemd[1]: systemd-timedated.service: Deactivated successfully. Nov 28 04:20:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23046 DF PROTO=TCP SPT=34794 DPT=9102 SEQ=1845156581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC6ACFB0000000001030307) Nov 28 04:20:06 localhost python3.9[141292]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts Nov 28 04:20:08 localhost python3.9[141384]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.nrwpmgem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:20:08 localhost python3.9[141459]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.nrwpmgem mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321607.794715-192-227701633114446/.source.nrwpmgem _original_basename=.ole86qdk follow=False checksum=37b6ce2b006ecd64876d6796769d1ed663c9f074 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:11 localhost python3.9[141551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:20:12 localhost python3.9[141643]: ansible-ansible.builtin.blockinfile Invoked with block=np0005538513.localdomain,192.168.122.106,np0005538513* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCToHi/c1OL/UxMWy2v/t0tcvSlMeoKa6EPBYbcu51p2Gn2UxEPgCRLM9+84Smh2pxAR4Y/5LVm2lbZ9Gf4okHGg5GLIyqzxxqbQHyR+YRljujVEOvksUPuKCptzx9fQj2Ij2t9GPGHc5klgGPIKjx0pza8T37vdz+G9y7zuK5wWI66AeN8y/6dD2hvi1Lp94VRSvTTEo+nUOFSIgsOwqQO+ZSwTgjG1pmtESBe8nkhW0I0BQPX46v9f1PN1LXDg8cN2FSVjQ91RI0uCvTaBYJ3soFBFspgiJ113zapbQCaNwg7lK7ofS0QT5WONP3QIsDAq1gSpWuOdS2DRY4NU3WMd4m5tLbj+ubiWr39rNU/zQiEl8r38aiM0OwOfuQ9S8wxO7phpVCQrbOkYCLLijdy/xTODvP+jYohTMWX8Gh6IVeVtm6SB2Tw3lDBCjpqlclCSs905Xe+mTJ6WYTaz+Q1xgflKEeemzJ0+rt+QZbrmL7u5MUdf/l/yOLAgACNsws=#012np0005538513.localdomain,192.168.122.106,np0005538513* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAcFP+DjLmcEEAm8Lwvxl6FPIO6oOWnH/RhIcXcMqT1F#012np0005538513.localdomain,192.168.122.106,np0005538513* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBCKBYRInRUdTiZ6KYKN+DMW+w3dTbv2b2ZRO5doLdo2BjNWxCzSevWq4Ptdwg4i7AwfVsH37MVU5ijvc8yJB7o=#012np0005538512.localdomain,192.168.122.105,np0005538512* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCy9/gxqH+eMqafXwUuPf+1Clpw4qsugdFefisnCDhJ5U7Pc+eWMUQVMS0ErxabBJhneDOyPwXwIbv72cEAtmgfvHDlSuS3mt8LRzKqsv1dXTy4Zqb3JGVzrvxo0iczGRsn2MIDJUv/Zjq9YqVeCnDj2HOwV+qx+EFecEFXS797FxsnMmTw0A5z8yUtBuJEGAKQX96LpZc4k5ltq+Uy0rK85Kk7cGR4A+wrIChLC8wggxvA99NdPEBtne6Chb+3PcbYUcTGhGtV6FGzpgbWmuWT/gcANb+fJE5/4n87loLmBMsmvGhvQuN9kuJ20g6nwPJbPTpIbV6XALx4tbma68bL3RL+lcGlh3jf0pEXPfolrB/MRmJn5ggMLjRv50FrowQalnCEgWE0gtd9IGjmqFz3jP008bGotn9rcacbjC2AvE+5NEjp7TzXGnFcD6jW8+9AWiusCww4ULs/oWbi0GLkmhwU5EifitDYF2+r1CigAdlEjb6sa0wAQSmclWk6guM=#012np0005538512.localdomain,192.168.122.105,np0005538512* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIK28mLCPVbXy0OXsvv/yFemdmkq0TouDg2F8iIBtrFNP#012np0005538512.localdomain,192.168.122.105,np0005538512* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOAaBZ7v0nx9ZqEqgPbFZS0ak6RTWK6bkXL/jWgEJnhpVMoiRYOxmcwlW3qCW0ftaWYgMItu1j7anWibS+umVXI=#012np0005538511.localdomain,192.168.122.104,np0005538511* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDqtmgm0KAOOIJ7a8whlZPfasnwJpfcm6zVmQjiKHZZrcojE/a6oALfufKXbfWWiLjJ2VzyK9v7QPNXhIWxgAKT9J40A1lSpSAmmxMaWvy+hzzvePs0Z4Fc4bFX7V4zBGI+dAJ+eAu73z6OKNuMhxBrL46ejpRFbqjwBP3veWRiLOMbyPn+Wc+amop0p1eEzV2QHMAIC5Dwm6/tYNLixNSa/Ea0ciaY3jWii+IGhYy+wqQP+9qkoVf9bZ4Ewa+7UfXI/q4zvvic/Znb8ZpCpezLnH4ilBORLyV9r/wkkkVGY7UVgUdSoLVjzTGQAtHl2ZgA3zJ2F2ES9QcBEvrHygT4vGgtEaxQn8XFhBwhzCpPaLyXti/6d+8M36cJx+7gv1eEfDgLz3MNR+tcnFSew9N6dIN4afV0DvA/9FsWk8PTqddN4iHcZzRo0GiDJWNtB+gYVZOytTYMZm2Cyv59IthEzxaB+wTZoSdCeuEeTM0ohYspOKirIPqMPuCbGbtrJFE=#012np0005538511.localdomain,192.168.122.104,np0005538511* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOw3UAOk5rmRZZUABN/csr2bxG0kPuwFOfnLWM0dbphK#012np0005538511.localdomain,192.168.122.104,np0005538511* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOporAXIBWakUq++II8S8bptvpP8um9hXQ1t0EGSEC6CKLIa5aENxiSz3hPWhpfOMIda2pAiC8tHJ/ctg1cA7bI=#012np0005538515.localdomain,192.168.122.108,np0005538515* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDbwc/gBZF5hmsFU6BSK1/DT5hduj2+3ukzoCGLU6mgpBv7BiInN7vVOqXilL+QUAWOvfKTekaQe1Vv/2jpygQnlu6MEMopmac/36IfVjgt39zxCULfSWv3Gp8tLP0ATF2LfhHBWFrGX7G3Bg3AiNfIUnQIQadBaKIByl+FfA7nJ7phwBAwJaQxvByGDeMwC2CWIUPgVqKclcw1WmldPnNmwquLlCbAeMV2hHlBfnVk8BI6fsOUcBB6a05zRpJpbrl584F+qkiQX0RpZYJQdZCoLiJStJv39lYhgiAWChUOVJsCbeNQnC9/Xgs5JhmRESgXh7Tm+8UNW9DxSHN7BS5qKYPUULdjobSp2v9pFOx30MLMsNd5r3JE07pgm5PpjuviSGEvJ8DIAPTF3kUXM43wax1q9rGV4ZfoJiLAwS9CmWWDWZDg17cnC5z+3qi+K8HUKz8LxQCHI+yEtTFzUEYyXTQfQbNvkauEHI/PwFA1iC+4/2g/0UhtjkM+FO2Czwk=#012np0005538515.localdomain,192.168.122.108,np0005538515* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVAkJQTOfLnB4ufl+yfJWTOwj/+yeZMYj9KPcqQhG41#012np0005538515.localdomain,192.168.122.108,np0005538515* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJxSQcYu8iH02KDWynHrNs+wu90XfG3ktCJ/ydvMFl7Khrh5CImI23f+XeJr4A7okpxJw7hhtVd+bcWjM/VGibU=#012np0005538510.localdomain,192.168.122.103,np0005538510* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAxqgPHnyGChl6yd1/HRo8ox+w8llSVhIj8iYUdDG7IquyLr4/CZguZzRkngbXi/Dq544iKS4kFL/zPKi+yuxeFs4b6fgo4vGoV8wwKNSJXx0d0hOQa9651VqB6k/trENRTgLa2fHkXgF+/g0f7HvloQfhr7qjhTBRV4l4UfJiOEpMvMxN6map/D0JuHlAZGZ5mGUoBTEMuPGEPvMWqe0kc/I8WIgsMsvijOGM2xDxsOqAYlV9a8faoyMdacWUNkeQTfPF6h+z8xdvP8qWPtrPKWHMpcGicTI6pFZ2JxOjWnMaBXs2j/CN7HFLbyOCwuAvAu9efAbxJvgtZlO++6kSlq7SHMzwv7PLP69GaQJHR+jANJ/O2BchbxL09mIkpFSzLSS0k7xXJlwqnAMciIlTaud2n5Hqnnb06WgtvD6O0nnuCLH5am7F1YDGJJgUmNbbgF1PuwzOZqQy+tA2igji/n2z87KkGZdIbrHdPU1PPIlzVGPO6aO02RhvtD+/iQM=#012np0005538510.localdomain,192.168.122.103,np0005538510* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO316T0CvGWuEUZtluJgtZ9ZZEUIgwqLNzmYcEgwx90d#012np0005538510.localdomain,192.168.122.103,np0005538510* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFs0shW57fSaFIES4CjKi1hUQjnXLq99+vhyRfpt8xn5+tcCwnrhlVxDAoMMHaxjmVGblslVcZ1lb3oEH51GZuE=#012np0005538514.localdomain,192.168.122.107,np0005538514* ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLIqwhlOevSQuHXF0nrkLOzRoSQqnWWb/cXzK4um93clqGujVOE9PUyL6ONBo/qlr4Pp+QMzSsIFwjW1T/6G+Ce2CS/TGphIUxvvB9NhBt+OJl/zDUEmjAU6bwVIx6ApqtimsXWWIap9GEtVWA5P9pcqPMyGzq1mCzwCS252Ylioij0zZxfMrxTt3RSsWrDED61vRes0ZKd8HERTLN+Lzis5t8f74zfwTesOea6CRkIHth4cUP7ua3q+KhhbhIPj+fXWN5w+qVbcTMJSYyUPsZ2ymPhR4x4db1oPk1Jg14dw1BnmAZZl3v8o4l7bUQ2Fj/PE1JbSiApxbK+V0KdZGMrG4iVbnMmzwBXPXHa6lNQGneflVd3MNEepnTnXx4hAVpaJHc8EtIREq8aPe07DW0wL9clpTKaSGU2Ma+BLXmSDPkuPh6JWLxn3iM1yybL574NnGt2MgBj6z2tiSb4NkNmaBkoG8PMHw8YUSabKBBZNiMEO2GKBpHZldSrYvOZHU=#012np0005538514.localdomain,192.168.122.107,np0005538514* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINhivqz2RYo1kKlRUCCEwVKn/fRbUXKh+9HKcoRBbRik#012np0005538514.localdomain,192.168.122.107,np0005538514* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj7Mfl3DOkiBgUjao8Ey8r/pUITSMDHIaEViUpgeShgnNz3/omNuAseQqHK6/tA9gN/Uo8Pq1wRSxeBtUVD++U=#012 create=True mode=0644 path=/tmp/ansible.nrwpmgem state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22627 DF PROTO=TCP SPT=44208 DPT=9100 SEQ=3514915242 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC6C9D20000000001030307) Nov 28 04:20:14 localhost python3.9[141735]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.nrwpmgem' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:20:14 localhost kernel: DROPPING: IN=br-ex OUT= 
MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61585 DF PROTO=TCP SPT=38078 DPT=9101 SEQ=1927108180 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC6CCA00000000001030307) Nov 28 04:20:16 localhost python3.9[141829]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.nrwpmgem state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:17 localhost systemd[1]: session-44.scope: Deactivated successfully. Nov 28 04:20:17 localhost systemd[1]: session-44.scope: Consumed 4.157s CPU time. Nov 28 04:20:17 localhost systemd-logind[763]: Session 44 logged out. Waiting for processes to exit. Nov 28 04:20:17 localhost systemd-logind[763]: Removed session 44. Nov 28 04:20:24 localhost sshd[141907]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:20:24 localhost systemd-logind[763]: New session 45 of user zuul. Nov 28 04:20:24 localhost systemd[1]: Started Session 45 of User zuul. 
Nov 28 04:20:25 localhost python3.9[142015]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:20:27 localhost python3.9[142111]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None Nov 28 04:20:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43824 DF PROTO=TCP SPT=43266 DPT=9105 SEQ=3828650478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC700D30000000001030307) Nov 28 04:20:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33956 DF PROTO=TCP SPT=60000 DPT=9882 SEQ=3871908273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC705FC0000000001030307) Nov 28 04:20:29 localhost python3.9[142205]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:20:30 localhost python3.9[142298]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:20:30 localhost python3.9[142391]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:20:31 localhost python3.9[142485]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft 
/etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:20:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42022 DF PROTO=TCP SPT=49360 DPT=9102 SEQ=3740878924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC712490000000001030307) Nov 28 04:20:32 localhost python3.9[142580]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:32 localhost systemd[1]: session-45.scope: Deactivated successfully. Nov 28 04:20:32 localhost systemd[1]: session-45.scope: Consumed 3.831s CPU time. Nov 28 04:20:32 localhost systemd-logind[763]: Session 45 logged out. Waiting for processes to exit. Nov 28 04:20:32 localhost systemd-logind[763]: Removed session 45. 
Nov 28 04:20:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42023 DF PROTO=TCP SPT=49360 DPT=9102 SEQ=3740878924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7163B0000000001030307) Nov 28 04:20:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42024 DF PROTO=TCP SPT=49360 DPT=9102 SEQ=3740878924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC71E3A0000000001030307) Nov 28 04:20:38 localhost sshd[142595]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:20:38 localhost systemd-logind[763]: New session 46 of user zuul. Nov 28 04:20:38 localhost systemd[1]: Started Session 46 of User zuul. Nov 28 04:20:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42025 DF PROTO=TCP SPT=49360 DPT=9102 SEQ=3740878924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC72DFA0000000001030307) Nov 28 04:20:39 localhost python3.9[142688]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:20:40 localhost python3.9[142784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:20:41 localhost python3.9[142838]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None Nov 28 04:20:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15236 DF PROTO=TCP SPT=44444 DPT=9100 SEQ=2418341388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC73F010000000001030307) Nov 28 04:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61001 DF PROTO=TCP SPT=37704 DPT=9101 SEQ=1634436327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC741D00000000001030307) Nov 28 04:20:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15237 DF PROTO=TCP SPT=44444 DPT=9100 SEQ=2418341388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC742FA0000000001030307) Nov 28 04:20:45 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61002 DF PROTO=TCP SPT=37704 DPT=9101 SEQ=1634436327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC745BA0000000001030307) Nov 28 04:20:45 localhost python3.9[142930]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:20:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15238 DF PROTO=TCP SPT=44444 DPT=9100 SEQ=2418341388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC74AFA0000000001030307) Nov 28 04:20:47 localhost python3.9[143023]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61003 DF PROTO=TCP SPT=37704 DPT=9101 SEQ=1634436327 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC74DBB0000000001030307) Nov 28 04:20:47 localhost python3.9[143115]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:48 localhost python3.9[143207]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated#012Core libraries or services have been updated since boot-up:#012 * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False 
backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:20:49 localhost python3.9[143297]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:20:50 localhost python3.9[143387]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:20:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15239 DF PROTO=TCP SPT=44444 DPT=9100 SEQ=2418341388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC75ABB0000000001030307) Nov 28 04:20:50 localhost python3.9[143479]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:20:51 localhost systemd[1]: session-46.scope: Deactivated successfully. Nov 28 04:20:51 localhost systemd[1]: session-46.scope: Consumed 8.693s CPU time. Nov 28 04:20:51 localhost systemd-logind[763]: Session 46 logged out. Waiting for processes to exit. Nov 28 04:20:51 localhost systemd-logind[763]: Removed session 46. Nov 28 04:20:56 localhost sshd[143494]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:20:57 localhost systemd-logind[763]: New session 47 of user zuul. 
Nov 28 04:20:57 localhost systemd[1]: Started Session 47 of User zuul. Nov 28 04:20:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41174 DF PROTO=TCP SPT=60178 DPT=9105 SEQ=1447876135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC776030000000001030307) Nov 28 04:20:58 localhost python3.9[143587]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:20:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41175 DF PROTO=TCP SPT=60178 DPT=9105 SEQ=1447876135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC779FA0000000001030307) Nov 28 04:20:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15240 DF PROTO=TCP SPT=44444 DPT=9100 SEQ=2418341388 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC77AFA0000000001030307) Nov 28 04:21:00 localhost python3.9[143683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:01 localhost python3.9[143775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Nov 28 04:21:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31280 DF PROTO=TCP SPT=43530 DPT=9882 SEQ=4129427140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7873A0000000001030307) Nov 28 04:21:02 localhost python3.9[143848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321661.155385-182-96870790585131/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:03 localhost python3.9[143940]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:04 localhost python3.9[144032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41177 DF PROTO=TCP SPT=60178 DPT=9105 SEQ=1447876135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5AC791BA0000000001030307) Nov 28 04:21:04 localhost python3.9[144105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321663.8510077-254-246269642521288/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:05 localhost python3.9[144197]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:06 localhost python3.9[144289]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:06 localhost python3.9[144362]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321665.6413279-324-186230067059145/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:07 localhost 
python3.9[144454]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:08 localhost python3.9[144546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:08 localhost python3.9[144619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321667.573376-396-230623098768221/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20592 DF PROTO=TCP SPT=52212 DPT=9102 SEQ=434276050 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7A33A0000000001030307) Nov 28 04:21:09 localhost python3.9[144711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:10 localhost python3.9[144803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:10 localhost python3.9[144876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321669.6399689-468-72087412119558/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:12 localhost python3.9[144968]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:12 localhost chronyd[134173]: Selected source 167.160.187.12 (pool.ntp.org) Nov 28 04:21:12 localhost python3.9[145060]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:13 localhost python3.9[145133]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764321672.1535165-540-110020842775095/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41178 DF PROTO=TCP SPT=60178 DPT=9105 SEQ=1447876135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7B2FA0000000001030307) Nov 28 04:21:13 localhost python3.9[145225]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31282 DF PROTO=TCP SPT=43530 DPT=9882 SEQ=4129427140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7B6FA0000000001030307) Nov 28 04:21:14 localhost python3.9[145317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:15 localhost python3.9[145390]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764321674.4826872-611-39281160695468/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:16 localhost python3.9[145482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38213 DF PROTO=TCP SPT=43314 DPT=9100 SEQ=3386024306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7C03B0000000001030307) Nov 28 04:21:16 localhost python3.9[145574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:17 localhost python3.9[145647]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321676.2881389-681-121224476243509/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=777d5f6763fde4fea484664803960858c2bba706 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None 
local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:18 localhost systemd[1]: session-47.scope: Deactivated successfully. Nov 28 04:21:18 localhost systemd[1]: session-47.scope: Consumed 11.502s CPU time. Nov 28 04:21:18 localhost systemd-logind[763]: Session 47 logged out. Waiting for processes to exit. Nov 28 04:21:18 localhost systemd-logind[763]: Removed session 47. Nov 28 04:21:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38214 DF PROTO=TCP SPT=43314 DPT=9100 SEQ=3386024306 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7CFFA0000000001030307) Nov 28 04:21:24 localhost sshd[145662]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:21:24 localhost systemd-logind[763]: New session 48 of user zuul. Nov 28 04:21:24 localhost systemd[1]: Started Session 48 of User zuul. Nov 28 04:21:25 localhost python3.9[145757]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:26 localhost python3.9[145942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:27 localhost python3.9[146034]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321686.1530418-64-257268379395393/.source.conf _original_basename=ceph.conf follow=False 
checksum=e86499341cc75988f759ac10cb7bf332387204b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26687 DF PROTO=TCP SPT=49094 DPT=9105 SEQ=3249111599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7EB330000000001030307) Nov 28 04:21:28 localhost python3.9[146141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26688 DF PROTO=TCP SPT=49094 DPT=9105 SEQ=3249111599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7EF3A0000000001030307) Nov 28 04:21:28 localhost python3.9[146214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321687.5908878-64-261334273689399/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=98ffd20e3b9db1cae39a950d9da1f69e92796658 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=38338 DF PROTO=TCP SPT=57676 DPT=9882 SEQ=1964313840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7F05C0000000001030307) Nov 28 04:21:29 localhost systemd[1]: session-48.scope: Deactivated successfully. Nov 28 04:21:29 localhost systemd[1]: session-48.scope: Consumed 2.186s CPU time. Nov 28 04:21:29 localhost systemd-logind[763]: Session 48 logged out. Waiting for processes to exit. Nov 28 04:21:29 localhost systemd-logind[763]: Removed session 48. Nov 28 04:21:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38340 DF PROTO=TCP SPT=57676 DPT=9882 SEQ=1964313840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC7FC7A0000000001030307) Nov 28 04:21:34 localhost sshd[146229]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:21:34 localhost systemd-logind[763]: New session 49 of user zuul. Nov 28 04:21:34 localhost systemd[1]: Started Session 49 of User zuul. 
Nov 28 04:21:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26690 DF PROTO=TCP SPT=49094 DPT=9105 SEQ=3249111599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC806FA0000000001030307) Nov 28 04:21:35 localhost python3.9[146322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:21:37 localhost python3.9[146418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:38 localhost python3.9[146510]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:21:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64523 DF PROTO=TCP SPT=36618 DPT=9102 SEQ=1488426619 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8187A0000000001030307) Nov 28 04:21:39 localhost python3.9[146600]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 
04:21:40 localhost python3.9[146692]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False Nov 28 04:21:41 localhost python3.9[146784]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:21:42 localhost python3.9[146838]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:21:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26691 DF PROTO=TCP SPT=49094 DPT=9105 SEQ=3249111599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC826FB0000000001030307) Nov 28 04:21:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40144 DF PROTO=TCP SPT=34980 DPT=9101 SEQ=2118942241 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC82C300000000001030307) Nov 28 04:21:46 localhost python3.9[146932]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:21:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42715 DF PROTO=TCP SPT=50552 DPT=9100 SEQ=3352178026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8357A0000000001030307) Nov 28 04:21:48 localhost python3[147027]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012 rule:#012 proto: udp#012 dport: 4789#012- rule_name: 119 neutron geneve networks#012 rule:#012 proto: udp#012 dport: 6081#012 state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: OUTPUT#012 jump: NOTRACK#012 action: append#012 state: []#012- rule_name: 121 neutron geneve networks no conntrack#012 rule:#012 proto: udp#012 dport: 6081#012 table: raw#012 chain: PREROUTING#012 jump: NOTRACK#012 action: append#012 state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present Nov 28 04:21:49 localhost python3.9[147119]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42716 DF PROTO=TCP SPT=50552 DPT=9100 SEQ=3352178026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8453A0000000001030307) Nov 28 04:21:50 localhost python3.9[147211]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Nov 28 04:21:51 localhost python3.9[147259]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:51 localhost python3.9[147351]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:52 localhost python3.9[147399]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.3a79_geu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:53 localhost python3.9[147491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:53 localhost python3.9[147539]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S 
unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:54 localhost python3.9[147631]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:21:55 localhost python3[147724]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall Nov 28 04:21:55 localhost python3.9[147816]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:56 localhost python3.9[147891]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321715.437453-433-245295769080927/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:57 localhost python3.9[147983]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36276 DF PROTO=TCP SPT=56778 DPT=9105 SEQ=2406824212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC860630000000001030307) Nov 28 04:21:57 localhost python3.9[148058]: 
ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321716.819206-478-92526906791120/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:21:58 localhost python3.9[148150]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:21:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36277 DF PROTO=TCP SPT=56778 DPT=9105 SEQ=2406824212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8647A0000000001030307) Nov 28 04:21:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42717 DF PROTO=TCP SPT=50552 DPT=9100 SEQ=3352178026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC864FA0000000001030307) Nov 28 04:21:59 localhost python3.9[148225]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321718.0207698-524-172836122936403/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:00 
localhost python3.9[148317]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:01 localhost python3.9[148392]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321720.0765269-568-163146176608401/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41180 DF PROTO=TCP SPT=60178 DPT=9105 SEQ=1447876135 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC870FA0000000001030307) Nov 28 04:22:03 localhost python3.9[148484]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:03 localhost python3.9[148559]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764321722.525155-614-78977186599096/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:04 localhost python3.9[148651]: ansible-ansible.builtin.file Invoked with 
group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36279 DF PROTO=TCP SPT=56778 DPT=9105 SEQ=2406824212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC87C3B0000000001030307) Nov 28 04:22:04 localhost python3.9[148743]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:22:05 localhost python3.9[148838]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:06 localhost python3.9[148931]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True 
stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:22:07 localhost python3.9[149024]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:22:07 localhost python3.9[149118]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:22:08 localhost python3.9[149213]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43190 DF PROTO=TCP SPT=47466 DPT=9102 SEQ=683390314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC88DBA0000000001030307) Nov 28 04:22:09 localhost python3.9[149303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:22:10 localhost python3.9[149396]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . 
external_ids:hostname=np0005538515.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:28:f9:1a:af" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:22:10 localhost ovs-vsctl[149397]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005538515.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:28:f9:1a:af external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch Nov 28 04:22:11 localhost python3.9[149489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:22:12 localhost python3.9[149582]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False 
checksum_algorithm=sha1 Nov 28 04:22:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36280 DF PROTO=TCP SPT=56778 DPT=9105 SEQ=2406824212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC89CFA0000000001030307) Nov 28 04:22:13 localhost python3.9[149676]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:13 localhost python3.9[149768]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18751 DF PROTO=TCP SPT=36858 DPT=9882 SEQ=426720246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8A0FA0000000001030307) Nov 28 04:22:14 localhost python3.9[149816]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:14 localhost python3.9[149908]: 
ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:15 localhost python3.9[149956]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:15 localhost python3.9[150048]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:16 localhost python3.9[150140]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10332 DF PROTO=TCP SPT=33336 DPT=9100 SEQ=455937120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8AA7A0000000001030307) Nov 28 04:22:16 localhost python3.9[150188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root 
dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:17 localhost python3.9[150280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:17 localhost python3.9[150328]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:18 localhost python3.9[150420]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:22:18 localhost systemd[1]: Reloading. Nov 28 04:22:18 localhost systemd-sysv-generator[150449]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:22:18 localhost systemd-rc-local-generator[150445]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:22:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:22:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10333 DF PROTO=TCP SPT=33336 DPT=9100 SEQ=455937120 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8BA3A0000000001030307) Nov 28 04:22:20 localhost python3.9[150550]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:21 localhost python3.9[150598]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:22 localhost python3.9[150691]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:23 localhost python3.9[150739]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:24 localhost python3.9[150831]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:22:24 localhost systemd[1]: Reloading. Nov 28 04:22:24 localhost systemd-rc-local-generator[150855]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:22:24 localhost systemd-sysv-generator[150859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:22:24 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:22:24 localhost systemd[1]: Starting Create netns directory... Nov 28 04:22:24 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 04:22:24 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 04:22:24 localhost systemd[1]: Finished Create netns directory. 
Nov 28 04:22:25 localhost python3.9[150967]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:26 localhost python3.9[151059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:26 localhost python3.9[151132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321745.6519449-1345-133025415248799/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:27 localhost python3.9[151224]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:22:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64274 DF PROTO=TCP 
SPT=35422 DPT=9105 SEQ=1586787104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8D5920000000001030307) Nov 28 04:22:28 localhost python3.9[151346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:22:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64275 DF PROTO=TCP SPT=35422 DPT=9105 SEQ=1586787104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8D9BA0000000001030307) Nov 28 04:22:28 localhost python3.9[151453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321747.7678537-1420-268989061163857/.source.json _original_basename=.hy5cctwr follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7166 DF PROTO=TCP SPT=45656 DPT=9882 SEQ=3468100896 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8DABC0000000001030307) Nov 28 04:22:29 localhost python3.9[151560]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26693 DF PROTO=TCP SPT=49094 DPT=9105 SEQ=3249111599 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8E4FA0000000001030307) Nov 28 04:22:32 localhost python3.9[151817]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False Nov 28 04:22:32 localhost python3.9[151909]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:22:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64277 DF PROTO=TCP SPT=35422 DPT=9105 SEQ=1586787104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC8F17A0000000001030307) Nov 28 04:22:34 localhost python3.9[152001]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 28 04:22:38 localhost python3[152120]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:22:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38576 DF PROTO=TCP SPT=57142 DPT=9102 SEQ=2612554662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC902BA0000000001030307) Nov 28 04:22:39 localhost python3[152120]: ansible-edpm_container_manage 
PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69",#012 "Digest": "sha256:7ab0ee81fdc9b162df9b50eb2e264c777d08f90975a442620ec940edabe300b2",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:7ab0ee81fdc9b162df9b50eb2e264c777d08f90975a442620ec940edabe300b2"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:43:38.999472418Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 345745352,#012 "VirtualSize": 345745352,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/d63efe17da859108a09d9b90626ba0c433787abe209cd4ac755f6ba2a5206671/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784/diff",#012 "WorkDir": 
"/var/lib/containers/storage/overlay/d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:41a433848ac42a81e513766649f77cfa09e37aae045bcbbb33be77f7cf86edc4",#012 "sha256:055d9012b48b3c8064accd40b6372c79c29fedd85061a710ada00677f88b1db9"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": 
"2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:37.752912815Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l Nov 
28 04:22:39 localhost podman[152170]: 2025-11-28 09:22:39.286289199 +0000 UTC m=+0.088607518 container remove 9779adc100c83ac3ca149677b2b7e017b342ee203e42cfa7e38dbb3b27c3d164 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller) Nov 28 04:22:39 localhost python3[152120]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller Nov 28 04:22:39 localhost podman[152183]: Nov 28 04:22:39 localhost podman[152183]: 2025-11-28 09:22:39.369997121 +0000 UTC m=+0.069487103 container create 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:22:39 localhost podman[152183]: 2025-11-28 09:22:39.331580656 +0000 UTC m=+0.031070688 image pull quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 28 04:22:39 localhost 
python3[152120]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified Nov 28 04:22:40 localhost python3.9[152311]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:22:41 localhost python3.9[152405]: ansible-file Invoked with 
path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:41 localhost python3.9[152451]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:22:42 localhost python3.9[152542]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764321761.5549982-1684-116866121389217/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:22:42 localhost python3.9[152588]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:22:42 localhost systemd[1]: Reloading. Nov 28 04:22:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64278 DF PROTO=TCP SPT=35422 DPT=9105 SEQ=1586787104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC910FA0000000001030307) Nov 28 04:22:42 localhost systemd-rc-local-generator[152607]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:22:42 localhost systemd-sysv-generator[152614]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. 
Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:22:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:22:43 localhost python3.9[152670]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:22:43 localhost systemd[1]: Reloading. Nov 28 04:22:43 localhost systemd-rc-local-generator[152697]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:22:43 localhost systemd-sysv-generator[152702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:22:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:22:44 localhost systemd[1]: Starting ovn_controller container... Nov 28 04:22:44 localhost systemd[1]: Started libcrun container. 
Nov 28 04:22:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5f4753be574a6e4d1b818630bc8663d83b7be29d27e9a8539e5e7161ddb05a6/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 28 04:22:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57497 DF PROTO=TCP SPT=40360 DPT=9101 SEQ=1127572714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC916900000000001030307) Nov 28 04:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:22:44 localhost podman[152712]: 2025-11-28 09:22:44.224366321 +0000 UTC m=+0.135922216 container init 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:22:44 localhost ovn_controller[152726]: + sudo -E kolla_set_configs Nov 28 04:22:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:22:44 localhost podman[152712]: 2025-11-28 09:22:44.261492677 +0000 UTC m=+0.173048542 container start 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 28 04:22:44 localhost edpm-start-podman-container[152712]: ovn_controller Nov 28 04:22:44 localhost systemd[1]: Created slice 
User Slice of UID 0. Nov 28 04:22:44 localhost systemd[1]: Starting User Runtime Directory /run/user/0... Nov 28 04:22:44 localhost systemd[1]: Finished User Runtime Directory /run/user/0. Nov 28 04:22:44 localhost systemd[1]: Starting User Manager for UID 0... Nov 28 04:22:44 localhost podman[152733]: 2025-11-28 09:22:44.357878733 +0000 UTC m=+0.091462761 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS) Nov 28 04:22:44 localhost podman[152733]: 2025-11-28 09:22:44.447908292 +0000 UTC m=+0.181492320 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:22:44 localhost podman[152733]: unhealthy Nov 28 04:22:44 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:22:44 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Failed with result 'exit-code'. Nov 28 04:22:44 localhost edpm-start-podman-container[152711]: Creating additional drop-in dependency for "ovn_controller" (98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9) Nov 28 04:22:44 localhost systemd[1]: Reloading. Nov 28 04:22:44 localhost systemd[152755]: Queued start job for default target Main User Target. 
Nov 28 04:22:44 localhost systemd[152755]: Created slice User Application Slice. Nov 28 04:22:44 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation. Nov 28 04:22:44 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:22:44 localhost systemd[152755]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system). Nov 28 04:22:44 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:22:44 localhost systemd[152755]: Started Daily Cleanup of User's Temporary Directories. Nov 28 04:22:44 localhost systemd[152755]: Reached target Paths. Nov 28 04:22:44 localhost systemd[152755]: Reached target Timers. Nov 28 04:22:44 localhost systemd[152755]: Starting D-Bus User Message Bus Socket... Nov 28 04:22:44 localhost systemd[152755]: Starting Create User's Volatile Files and Directories... Nov 28 04:22:44 localhost systemd[152755]: Listening on D-Bus User Message Bus Socket. Nov 28 04:22:44 localhost systemd[152755]: Reached target Sockets. Nov 28 04:22:44 localhost systemd[152755]: Finished Create User's Volatile Files and Directories. Nov 28 04:22:44 localhost systemd[152755]: Reached target Basic System. Nov 28 04:22:44 localhost systemd[152755]: Reached target Main User Target. Nov 28 04:22:44 localhost systemd[152755]: Startup finished in 158ms. Nov 28 04:22:44 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:22:44 localhost systemd-rc-local-generator[152811]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:22:44 localhost systemd-sysv-generator[152815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:22:44 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:22:44 localhost systemd[1]: tmp-crun.jO1Edv.mount: Deactivated successfully. Nov 28 04:22:44 localhost systemd[1]: Started User Manager for UID 0. Nov 28 04:22:44 localhost systemd[1]: Started ovn_controller container. Nov 28 04:22:44 localhost systemd[1]: Started Session c11 of User root. Nov 28 04:22:44 localhost ovn_controller[152726]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:22:44 localhost ovn_controller[152726]: INFO:__main__:Validating config file Nov 28 04:22:44 localhost ovn_controller[152726]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:22:44 localhost ovn_controller[152726]: INFO:__main__:Writing out command to execute Nov 28 04:22:44 localhost systemd[1]: session-c11.scope: Deactivated successfully. Nov 28 04:22:44 localhost ovn_controller[152726]: ++ cat /run_command Nov 28 04:22:44 localhost ovn_controller[152726]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Nov 28 04:22:44 localhost ovn_controller[152726]: + ARGS= Nov 28 04:22:44 localhost ovn_controller[152726]: + sudo kolla_copy_cacerts Nov 28 04:22:44 localhost systemd[1]: Started Session c12 of User root. Nov 28 04:22:44 localhost systemd[1]: session-c12.scope: Deactivated successfully. Nov 28 04:22:44 localhost ovn_controller[152726]: + [[ ! -n '' ]] Nov 28 04:22:44 localhost ovn_controller[152726]: + . 
kolla_extend_start Nov 28 04:22:44 localhost ovn_controller[152726]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\''' Nov 28 04:22:44 localhost ovn_controller[152726]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock ' Nov 28 04:22:44 localhost ovn_controller[152726]: + umask 0022 Nov 28 04:22:44 localhost ovn_controller[152726]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting... Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8] Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00004|main|INFO|OVS IDL reconnected, force recompute. Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting... Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00006|main|INFO|OVNSB IDL reconnected, force recompute. Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting... 
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00013|main|INFO|OVS feature set changed, force recompute.
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00021|main|INFO|OVS feature set changed, force recompute.
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 04:22:44 localhost ovn_controller[152726]: 2025-11-28T09:22:44Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 28 04:22:45 localhost python3.9[152921]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:22:45 localhost ovs-vsctl[152922]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 28 04:22:46 localhost python3.9[153014]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:22:46 localhost ovs-vsctl[153016]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 28 04:22:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34378 DF PROTO=TCP SPT=45790 DPT=9100 SEQ=3977660164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC91FBA0000000001030307)
Nov 28 04:22:48 localhost python3.9[153109]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:22:48 localhost ovs-vsctl[153110]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 28 04:22:48 localhost systemd[1]: session-49.scope: Deactivated successfully.
Nov 28 04:22:48 localhost systemd[1]: session-49.scope: Consumed 40.691s CPU time.
Nov 28 04:22:48 localhost systemd-logind[763]: Session 49 logged out. Waiting for processes to exit.
Nov 28 04:22:48 localhost systemd-logind[763]: Removed session 49.
Nov 28 04:22:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34379 DF PROTO=TCP SPT=45790 DPT=9100 SEQ=3977660164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC92F7B0000000001030307)
Nov 28 04:22:54 localhost sshd[153125]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:22:55 localhost systemd-logind[763]: New session 51 of user zuul.
Nov 28 04:22:55 localhost systemd[1]: Started Session 51 of User zuul.
Nov 28 04:22:55 localhost systemd[1]: Stopping User Manager for UID 0...
Nov 28 04:22:55 localhost systemd[152755]: Activating special unit Exit the Session...
Nov 28 04:22:55 localhost systemd[152755]: Stopped target Main User Target.
Nov 28 04:22:55 localhost systemd[152755]: Stopped target Basic System.
Nov 28 04:22:55 localhost systemd[152755]: Stopped target Paths.
Nov 28 04:22:55 localhost systemd[152755]: Stopped target Sockets.
Nov 28 04:22:55 localhost systemd[152755]: Stopped target Timers.
Nov 28 04:22:55 localhost systemd[152755]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 28 04:22:55 localhost systemd[152755]: Closed D-Bus User Message Bus Socket.
Nov 28 04:22:55 localhost systemd[152755]: Stopped Create User's Volatile Files and Directories.
Nov 28 04:22:55 localhost systemd[152755]: Removed slice User Application Slice.
Nov 28 04:22:55 localhost systemd[152755]: Reached target Shutdown.
Nov 28 04:22:55 localhost systemd[152755]: Finished Exit the Session.
Nov 28 04:22:55 localhost systemd[152755]: Reached target Exit the Session.
Nov 28 04:22:55 localhost systemd[1]: user@0.service: Deactivated successfully.
Nov 28 04:22:55 localhost systemd[1]: Stopped User Manager for UID 0.
Nov 28 04:22:55 localhost systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 28 04:22:55 localhost systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 28 04:22:55 localhost systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 28 04:22:55 localhost systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 28 04:22:55 localhost systemd[1]: Removed slice User Slice of UID 0.
Nov 28 04:22:56 localhost python3.9[153219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:22:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54857 DF PROTO=TCP SPT=59168 DPT=9105 SEQ=4196639421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC94AC30000000001030307)
Nov 28 04:22:57 localhost python3.9[153315]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:22:58 localhost python3.9[153407]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:22:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54858 DF PROTO=TCP SPT=59168 DPT=9105 SEQ=4196639421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC94EBB0000000001030307)
Nov 28 04:22:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34380 DF PROTO=TCP SPT=45790 DPT=9100 SEQ=3977660164 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC94EFB0000000001030307)
Nov 28 04:22:59 localhost python3.9[153499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:00 localhost python3.9[153591]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:00 localhost python3.9[153683]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36282 DF PROTO=TCP SPT=56778 DPT=9105 SEQ=2406824212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC95AFA0000000001030307)
Nov 28 04:23:01 localhost python3.9[153773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:23:02 localhost sshd[153820]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:23:02 localhost python3.9[153866]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 28 04:23:03 localhost python3.9[153957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:04 localhost python3.9[154030]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321782.9724538-221-64914217819856/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54860 DF PROTO=TCP SPT=59168 DPT=9105 SEQ=4196639421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9667A0000000001030307)
Nov 28 04:23:04 localhost python3.9[154120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:05 localhost python3.9[154193]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321784.3687816-265-149231866377358/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:06 localhost python3.9[154285]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:23:07 localhost python3.9[154339]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:23:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18546 DF PROTO=TCP SPT=57534 DPT=9102 SEQ=2894442272 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC977FA0000000001030307)
Nov 28 04:23:11 localhost python3.9[154433]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 28 04:23:11 localhost python3.9[154526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:12 localhost python3.9[154597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321791.4755554-376-26151904617266/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54861 DF PROTO=TCP SPT=59168 DPT=9105 SEQ=4196639421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC986FA0000000001030307)
Nov 28 04:23:13 localhost python3.9[154687]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:13 localhost python3.9[154758]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321792.7528632-376-173464192714396/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15772 DF PROTO=TCP SPT=34894 DPT=9101 SEQ=2655486233 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC98BC10000000001030307)
Nov 28 04:23:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:23:14 localhost python3.9[154848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:14 localhost systemd[1]: tmp-crun.Hq7D4S.mount: Deactivated successfully.
Nov 28 04:23:14 localhost podman[154849]: 2025-11-28 09:23:14.986515961 +0000 UTC m=+0.091489883 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 28 04:23:14 localhost ovn_controller[152726]: 2025-11-28T09:23:14Z|00023|memory|INFO|13040 kB peak resident set size after 30.1 seconds
Nov 28 04:23:14 localhost ovn_controller[152726]: 2025-11-28T09:23:14Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Nov 28 04:23:15 localhost podman[154849]: 2025-11-28 09:23:15.027425128 +0000 UTC m=+0.132399030 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 28 04:23:15 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:23:15 localhost python3.9[154944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321794.4773312-508-223363253615347/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:16 localhost python3.9[155034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32106 DF PROTO=TCP SPT=41286 DPT=9100 SEQ=3622792034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC994FA0000000001030307)
Nov 28 04:23:16 localhost python3.9[155105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764321795.6301637-508-148679564908690/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:17 localhost python3.9[155195]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:23:17 localhost python3.9[155289]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:19 localhost python3.9[155381]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:19 localhost python3.9[155429]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:20 localhost python3.9[155521]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32107 DF PROTO=TCP SPT=41286 DPT=9100 SEQ=3622792034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9A4BA0000000001030307)
Nov 28 04:23:20 localhost python3.9[155569]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:23:21 localhost python3.9[155661]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:23:22 localhost python3.9[155753]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:22 localhost python3.9[155801]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:23:23 localhost python3.9[155893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:24 localhost python3.9[155941]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:23:24 localhost python3.9[156033]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:23:24 localhost systemd[1]: Reloading.
Nov 28 04:23:25 localhost systemd-rc-local-generator[156061]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:23:25 localhost systemd-sysv-generator[156064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:23:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:23:26 localhost python3.9[156163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:26 localhost python3.9[156211]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:23:27 localhost python3.9[156303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:23:27 localhost python3.9[156351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:23:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1569 DF PROTO=TCP SPT=38762 DPT=9105 SEQ=3189010300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9BFF30000000001030307)
Nov 28 04:23:28 localhost python3.9[156443]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:23:28 localhost systemd[1]: Reloading.
Nov 28 04:23:28 localhost systemd-sysv-generator[156473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:23:28 localhost systemd-rc-local-generator[156468]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:23:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:23:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1570 DF PROTO=TCP SPT=38762 DPT=9105 SEQ=3189010300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9C3FA0000000001030307)
Nov 28 04:23:28 localhost systemd[1]: Starting Create netns directory...
Nov 28 04:23:28 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 04:23:28 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 04:23:28 localhost systemd[1]: Finished Create netns directory.
Nov 28 04:23:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32108 DF PROTO=TCP SPT=41286 DPT=9100 SEQ=3622792034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9C4FA0000000001030307) Nov 28 04:23:29 localhost python3.9[156609]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:23:30 localhost podman[156757]: 2025-11-28 09:23:30.150592124 +0000 UTC m=+0.091022669 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.expose-services=, release=553, CEPH_POINT_RELEASE=, vcs-type=git, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:23:30 localhost podman[156757]: 2025-11-28 09:23:30.261552328 +0000 UTC m=+0.201982873 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True) Nov 28 04:23:30 localhost python3.9[156783]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:23:30 localhost python3.9[156929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764321809.846758-961-56186497430519/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:23:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24346 DF PROTO=TCP SPT=37990 DPT=9882 SEQ=4210979322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9D13B0000000001030307) Nov 28 04:23:32 localhost python3.9[157080]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:23:33 localhost python3.9[157172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:23:34 localhost python3.9[157247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764321813.0384083-1036-165777245459823/.source.json _original_basename=.lrkd_gpb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Nov 28 04:23:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1572 DF PROTO=TCP SPT=38762 DPT=9105 SEQ=3189010300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9DBBA0000000001030307) Nov 28 04:23:34 localhost python3.9[157339]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:23:37 localhost python3.9[157596]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False Nov 28 04:23:37 localhost python3.9[157688]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:23:38 localhost python3.9[157780]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 28 04:23:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48921 DF PROTO=TCP SPT=44214 DPT=9102 SEQ=1344649797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9ED3A0000000001030307) Nov 28 04:23:42 localhost python3[157897]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:23:43 
localhost python3[157897]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071",#012 "Digest": "sha256:2b8255d3a22035616e569dbe22862a2560e15cdaefedae0059a354d558788e1e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:2b8255d3a22035616e569dbe22862a2560e15cdaefedae0059a354d558788e1e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:34:14.989876147Z",#012 "Config": {#012 "User": "neutron",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 784145152,#012 "VirtualSize": 784145152,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": 
"/var/lib/containers/storage/overlay/f04f6aa8018da724c9daa5ca37db7cd13477323f1b725eec5dac97862d883048/diff:/var/lib/containers/storage/overlay/47afe78ba3ac18f156703d7ad9e4be64941a9d1bd472a4c2a59f4f2c3531ee35/diff:/var/lib/containers/storage/overlay/f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/b574f97f279779c52df37c61d993141d596fdb6544fa700fbddd8f35f27a4d3b/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/b574f97f279779c52df37c61d993141d596fdb6544fa700fbddd8f35f27a4d3b/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:c136b33417f134a3b932677bcf7a2df089c29f20eca250129eafd2132d4708bb",#012 "sha256:bc63f71478d9d90db803b468b28e5d9e0268adbace958b608ab10bd0819798bd",#012 "sha256:3277562ff4450bdcd859dd0b0be874b10dd6f3502be711d42aab9ff44a85cf28",#012 "sha256:982219792b3d83fa04ae12d0161dd3b982e7e3ed68293e6c876d50161b73746b"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "neutron",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) 
ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main 
clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf Nov 28 04:23:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1573 DF PROTO=TCP SPT=38762 DPT=9105 SEQ=3189010300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AC9FCFA0000000001030307) Nov 28 04:23:43 localhost podman[157946]: 2025-11-28 09:23:43.241997241 +0000 UTC m=+0.083076403 container remove e1c70e5c2d14d8fb586d5c91ffbea685245645bc934766806b986b70d0aebd47 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, container_name=ovn_metadata_agent, 
config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '08c21dad54d1ba598c6e2fae6b853aba'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1) Nov 28 04:23:43 localhost python3[157897]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent Nov 28 
04:23:43 localhost podman[157960]: Nov 28 04:23:43 localhost podman[157960]: 2025-11-28 09:23:43.349268433 +0000 UTC m=+0.087902164 container create b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:23:43 localhost podman[157960]: 2025-11-28 09:23:43.307328519 +0000 
UTC m=+0.045962270 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 28 04:23:43 localhost python3[157897]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume 
/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 28 04:23:44 localhost python3.9[158086]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:23:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32614 DF PROTO=TCP SPT=36358 DPT=9101 SEQ=904541868 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA00F00000000001030307) Nov 28 04:23:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:23:45 localhost podman[158181]: 2025-11-28 09:23:45.291254256 +0000 UTC m=+0.082951948 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:23:45 localhost podman[158181]: 2025-11-28 09:23:45.330582463 +0000 UTC m=+0.122280155 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 28 04:23:45 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:23:45 localhost python3.9[158180]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:23:45 localhost python3.9[158250]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:23:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16557 DF PROTO=TCP SPT=55620 DPT=9100 SEQ=1599959550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA0A3A0000000001030307) Nov 28 04:23:47 localhost python3.9[158341]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764321825.9085803-1300-208601423066950/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:23:47 localhost python3.9[158387]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:23:47 localhost systemd[1]: Reloading. Nov 28 04:23:47 localhost systemd-sysv-generator[158417]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:23:47 localhost systemd-rc-local-generator[158411]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:23:47 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:23:48 localhost python3.9[158469]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:23:48 localhost systemd[1]: Reloading. Nov 28 04:23:48 localhost systemd-rc-local-generator[158494]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:23:48 localhost systemd-sysv-generator[158500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:23:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:23:48 localhost systemd[1]: Starting ovn_metadata_agent container... Nov 28 04:23:49 localhost systemd[1]: Started libcrun container. 
Nov 28 04:23:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf443e7ac417e059a5e66d88527948f9e0d9e4436d26552658ad1f69652f989/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 28 04:23:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf443e7ac417e059a5e66d88527948f9e0d9e4436d26552658ad1f69652f989/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:23:49 localhost podman[158511]: 2025-11-28 09:23:49.153834657 +0000 UTC m=+0.158990665 container init b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + sudo -E kolla_set_configs Nov 28 04:23:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:23:49 localhost podman[158511]: 2025-11-28 09:23:49.190706391 +0000 UTC m=+0.195862409 container start b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 28 04:23:49 localhost edpm-start-podman-container[158511]: ovn_metadata_agent Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Validating config file Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Copying service configuration files Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Writing out command to execute Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for 
/var/lib/neutron/.cache Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: ++ cat /run_command Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + CMD=neutron-ovn-metadata-agent Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + ARGS= Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + sudo kolla_copy_cacerts Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + [[ ! -n '' ]] Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + . 
kolla_extend_start Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\''' Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: Running command: 'neutron-ovn-metadata-agent' Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + umask 0022 Nov 28 04:23:49 localhost ovn_metadata_agent[158525]: + exec neutron-ovn-metadata-agent Nov 28 04:23:49 localhost podman[158533]: 2025-11-28 09:23:49.383418814 +0000 UTC m=+0.184642443 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:23:49 localhost edpm-start-podman-container[158510]: Creating additional drop-in dependency for "ovn_metadata_agent" (b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c) Nov 28 04:23:49 localhost systemd[1]: Reloading. Nov 28 04:23:49 localhost podman[158533]: 2025-11-28 09:23:49.414473244 +0000 UTC m=+0.215696923 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent) Nov 28 04:23:49 localhost systemd-sysv-generator[158602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:23:49 localhost systemd-rc-local-generator[158599]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:23:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:23:49 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:23:49 localhost systemd[1]: Started ovn_metadata_agent container. Nov 28 04:23:50 localhost systemd[1]: session-51.scope: Deactivated successfully. Nov 28 04:23:50 localhost systemd[1]: session-51.scope: Consumed 31.676s CPU time. Nov 28 04:23:50 localhost systemd-logind[763]: Session 51 logged out. Waiting for processes to exit. Nov 28 04:23:50 localhost systemd-logind[763]: Removed session 51. 
Nov 28 04:23:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16558 DF PROTO=TCP SPT=55620 DPT=9100 SEQ=1599959550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA19FA0000000001030307) Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.762 158530 INFO neutron.common.config [-] Logging enabled!#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.762 158530 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.762 158530 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.763 158530 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] backlog = 4096 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.764 158530 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.765 158530 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent 
[-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.766 158530 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.767 158530 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog = 4096 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.768 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 
04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.769 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 
DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.770 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout = 600 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.771 158530 DEBUG neutron.agent.ovn.metadata_agent [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 
158530 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.772 158530 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.773 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.774 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.775 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.776 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.777 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.778 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities = [12, 21] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.779 158530 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.780 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.781 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 
DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.782 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.783 158530 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.784 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.785 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.786 158530 
DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.787 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.788 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 
158530 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.789 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.790 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG 
neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.791 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 
158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.792 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.793 158530 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.793 158530 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.800 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.801 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.801 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.801 158530 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.801 158530 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.826 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 62c03cad-89c1-4fd7-973b-8f2a608c71f1 (UUID: 62c03cad-89c1-4fd7-973b-8f2a608c71f1) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.849 158530 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.850 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.850 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.850 158530 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.853 158530 INFO 
ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.858 158530 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.872 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '62c03cad-89c1-4fd7-973b-8f2a608c71f1'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[], external_ids={'neutron:ovn-metadata-id': '2d28d085-9d5e-537f-ab04-258862acbcc7', 'neutron:ovn-metadata-sb-cfg': '1'}, name=62c03cad-89c1-4fd7-973b-8f2a608c71f1, nb_cfg_timestamp=1764321773775, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.874 158530 DEBUG neutron_lib.callbacks.manager [-] Subscribe: > process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.875 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.875 158530 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.875 158530 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.876 158530 INFO oslo_service.service [-] 
Starting 1 workers#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.879 158530 DEBUG oslo_service.service [-] Started child 158625 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.883 158530 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpvqp2nwoj/privsep.sock']#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.883 158625 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-240285'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.905 158625 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.906 158625 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.906 158625 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.910 158625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.911 
158625 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected#033[00m Nov 28 04:23:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:50.924 158625 INFO eventlet.wsgi.server [-] (158625) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.482 158530 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.483 158530 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvqp2nwoj/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.377 158630 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.382 158630 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.386 158630 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.386 158630 INFO oslo.privsep.daemon [-] privsep daemon running as pid 158630#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.485 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[20642f77-97c6-4659-998d-442027c0c63c]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.928 158630 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:51.928 158630 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:23:51 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:51.928 158630 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.358 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[335438eb-9859-4825-8a15-8f73c62cde92]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.362 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, column=external_ids, values=({'neutron:ovn-metadata-id': '2d28d085-9d5e-537f-ab04-258862acbcc7'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.363 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.363 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.379 158530 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.379 158530 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.379 158530 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.379 158530 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] agent_down_time = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] allow_bulk = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] api_extensions_path = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.380 158530 DEBUG oslo_service.service [-] api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] api_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] auth_ca_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.381 158530 DEBUG oslo_service.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] bind_port = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] client_socket_timeout = 900 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] config_dir = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] config_file = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] control_exchange = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.382 158530 DEBUG oslo_service.service [-] core_plugin = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] default_availability_zones = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 
'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] dhcp_agent_notification = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] dhcp_lease_duration = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.383 158530 DEBUG oslo_service.service [-] dhcp_load_type = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] dns_domain = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] enable_new_agents = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] enable_traditional_dhcp = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] external_dns_driver = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] external_pids = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.384 158530 DEBUG oslo_service.service [-] filter_validation = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] global_physnet_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.385 158530 DEBUG oslo_service.service [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG 
oslo_service.service [-] ipam_driver = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] ipv6_pd_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.386 158530 DEBUG oslo_service.service [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service 
[-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.387 158530 DEBUG oslo_service.service [-] max_dns_nameservers = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] 
max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] max_subnet_host_routes = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] metadata_backlog = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] metadata_proxy_group = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.388 158530 DEBUG oslo_service.service [-] metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.389 158530 DEBUG oslo_service.service [-] metadata_proxy_socket = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.389 158530 DEBUG oslo_service.service [-] metadata_proxy_socket_mode = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.389 158530 DEBUG oslo_service.service [-] metadata_proxy_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.389 158530 DEBUG oslo_service.service [-] metadata_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.389 158530 DEBUG oslo_service.service [-] network_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.389 158530 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_client_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_client_priv_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_metadata_host = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_metadata_insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_metadata_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.390 158530 DEBUG oslo_service.service [-] nova_metadata_protocol = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] pagination_max_limit = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] periodic_fuzzy_delay = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] periodic_interval = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.391 158530 DEBUG oslo_service.service [-] retry_until_window = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] rpc_resources_processing_step = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] rpc_response_max_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] rpc_state_report_workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] rpc_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] send_events_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] service_plugins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.392 158530 DEBUG oslo_service.service [-] setproctitle = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.393 158530 DEBUG oslo_service.service [-] state_path = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.393 158530 DEBUG oslo_service.service [-] syslog_log_facility = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.393 158530 DEBUG oslo_service.service [-] tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.393 158530 DEBUG oslo_service.service [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.393 158530 DEBUG oslo_service.service [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.394 158530 DEBUG oslo_service.service [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.394 158530 DEBUG oslo_service.service [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.394 158530 DEBUG oslo_service.service [-] use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.394 158530 DEBUG oslo_service.service [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.394 158530 DEBUG oslo_service.service [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.395 158530 DEBUG oslo_service.service [-] vlan_transparent = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] wsgi_default_pool_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] wsgi_keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] wsgi_server_debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.395 158530 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.396 158530 DEBUG oslo_service.service [-] oslo_concurrency.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.396 158530 DEBUG oslo_service.service [-] profiler.connection_string = messaging:// 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.396 158530 DEBUG oslo_service.service [-] profiler.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.396 158530 DEBUG oslo_service.service [-] profiler.es_doc_type = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.es_scroll_size = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.es_scroll_time = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.filter_error_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.hmac_keys = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 DEBUG oslo_service.service [-] profiler.socket_timeout = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.397 158530 
DEBUG oslo_service.service [-] profiler.trace_sqlalchemy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.enforce_scope = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.398 158530 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.399 158530 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] privsep.capabilities = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.400 158530 DEBUG oslo_service.service [-] privsep.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG 
oslo_service.service [-] privsep_dhcp_release.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.401 158530 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_dhcp_release.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.402 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name = oslo_privsep.daemon log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_namespace.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_namespace.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.403 158530 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_namespace.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_conntrack.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_conntrack.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.404 158530 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_conntrack.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_link.capabilities = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_link.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_link.helper_command = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_link.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.405 158530 DEBUG oslo_service.service [-] privsep_link.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] privsep_link.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.406 158530 DEBUG oslo_service.service [-] AGENT.kill_scripts_path = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] AGENT.root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] AGENT.root_helper_daemon = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] AGENT.use_random_fully = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.407 158530 DEBUG oslo_service.service [-] QUOTAS.default_quota = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.quota_network = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG 
oslo_service.service [-] QUOTAS.quota_port = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.quota_security_group = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.quota_subnet = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.408 158530 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 DEBUG oslo_service.service [-] nova.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 DEBUG oslo_service.service [-] nova.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 DEBUG oslo_service.service [-] nova.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 DEBUG oslo_service.service [-] nova.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 
DEBUG oslo_service.service [-] nova.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.409 158530 DEBUG oslo_service.service [-] nova.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] nova.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] nova.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] nova.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] nova.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] nova.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.410 158530 DEBUG oslo_service.service [-] placement.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG 
oslo_service.service [-] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.endpoint_type = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.411 158530 DEBUG oslo_service.service [-] placement.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.412 158530 DEBUG oslo_service.service [-] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.412 158530 DEBUG oslo_service.service [-] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.enable_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.413 158530 DEBUG oslo_service.service [-] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.414 158530 DEBUG oslo_service.service [-] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost 
ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ironic.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] cli_script.dry_run = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.415 158530 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.dns_servers = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.neutron_sync_mode = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.416 158530 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_l3_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service 
[-] ovn.ovn_metadata_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.417 158530 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_nb_connection = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_sb_connection = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:23:52.418 158530 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] ovn.ovsdb_log_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] ovn.vhost_sock_dir = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] ovn.vif_type = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.419 158530 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] OVS.ovsdb_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 
localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.420 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 
158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.421 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG 
oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.422 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.423 158530 DEBUG 
oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.424 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:23:52 localhost ovn_metadata_agent[158525]: 2025-11-28 09:23:52.425 158530 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 28 04:23:56 localhost sshd[158635]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:23:57 localhost systemd-logind[763]: New session 52 of user zuul. Nov 28 04:23:57 localhost systemd[1]: Started Session 52 of User zuul. 
Nov 28 04:23:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37491 DF PROTO=TCP SPT=36594 DPT=9105 SEQ=2108030151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA35230000000001030307) Nov 28 04:23:57 localhost python3.9[158728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:23:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37492 DF PROTO=TCP SPT=36594 DPT=9105 SEQ=2108030151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA393A0000000001030307) Nov 28 04:23:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41753 DF PROTO=TCP SPT=57798 DPT=9882 SEQ=3561430897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA3A4C0000000001030307) Nov 28 04:24:00 localhost python3.9[158824]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:01 localhost python3.9[158929]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54863 DF PROTO=TCP SPT=59168 DPT=9105 SEQ=4196639421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA44FA0000000001030307) Nov 28 04:24:01 localhost systemd[1]: libpod-f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b.scope: Deactivated successfully. Nov 28 04:24:01 localhost podman[158930]: 2025-11-28 09:24:01.657112113 +0000 UTC m=+0.075663514 container died f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, architecture=x86_64) Nov 28 04:24:01 localhost systemd[1]: tmp-crun.HRXK5v.mount: Deactivated successfully. 
Nov 28 04:24:01 localhost podman[158930]: 2025-11-28 09:24:01.696523543 +0000 UTC m=+0.115074844 container cleanup f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt) Nov 28 04:24:01 localhost podman[158945]: 2025-11-28 09:24:01.723806897 +0000 UTC m=+0.056810093 container remove f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, url=https://www.redhat.com, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, 
batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1) Nov 28 04:24:01 localhost systemd[1]: libpod-conmon-f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b.scope: Deactivated successfully. Nov 28 04:24:02 localhost systemd[1]: var-lib-containers-storage-overlay-6d65302dd3a585cea223ca3e05b9a858698ed3b54cf3bbf51971fe5feba8f16c-merged.mount: Deactivated successfully. Nov 28 04:24:02 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f207e5b37e3f4ec55a88edcf4dbcbe5cbbc20fb4f3557998c461a11b61b3019b-userdata-shm.mount: Deactivated successfully. Nov 28 04:24:02 localhost python3.9[159051]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:24:02 localhost systemd[1]: Reloading. Nov 28 04:24:03 localhost systemd-sysv-generator[159082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:24:03 localhost systemd-rc-local-generator[159076]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:24:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:24:04 localhost python3.9[159177]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:24:04 localhost network[159194]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:24:04 localhost network[159195]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:24:04 localhost network[159196]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:24:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37494 DF PROTO=TCP SPT=36594 DPT=9105 SEQ=2108030151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA50FA0000000001030307) Nov 28 04:24:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:24:08 localhost python3.9[159398]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:08 localhost systemd[1]: Reloading. Nov 28 04:24:08 localhost systemd-sysv-generator[159431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:24:08 localhost systemd-rc-local-generator[159425]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:24:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:24:08 localhost systemd[1]: Stopped target tripleo_nova_libvirt.target. Nov 28 04:24:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40218 DF PROTO=TCP SPT=35690 DPT=9102 SEQ=1147369062 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA627A0000000001030307) Nov 28 04:24:10 localhost python3.9[159530]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:11 localhost python3.9[159623]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:12 localhost python3.9[159716]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37495 DF PROTO=TCP SPT=36594 DPT=9105 SEQ=2108030151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA70FA0000000001030307) Nov 28 04:24:13 localhost python3.9[159809]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped 
daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42010 DF PROTO=TCP SPT=53020 DPT=9101 SEQ=2759722639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA76210000000001030307) Nov 28 04:24:14 localhost python3.9[159902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:14 localhost python3.9[159995]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:24:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:24:15 localhost systemd[1]: tmp-crun.ARinvB.mount: Deactivated successfully. 
Nov 28 04:24:15 localhost podman[160011]: 2025-11-28 09:24:15.983405951 +0000 UTC m=+0.091234095 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:24:16 localhost podman[160011]: 2025-11-28 09:24:16.024548869 +0000 UTC m=+0.132377013 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:24:16 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:24:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27152 DF PROTO=TCP SPT=48020 DPT=9100 SEQ=1198390150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA7F7A0000000001030307) Nov 28 04:24:17 localhost python3.9[160112]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:17 localhost python3.9[160204]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:18 localhost python3.9[160296]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:19 localhost python3.9[160388]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:19 localhost python3.9[160480]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:24:19 localhost podman[160553]: 2025-11-28 09:24:19.945038218 +0000 UTC m=+0.054452444 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:24:19 localhost podman[160553]: 2025-11-28 09:24:19.947525981 +0000 UTC m=+0.056940207 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:24:19 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:24:20 localhost python3.9[160590]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27153 DF PROTO=TCP SPT=48020 DPT=9100 SEQ=1198390150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACA8F3B0000000001030307) Nov 28 04:24:20 localhost python3.9[160682]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:21 localhost python3.9[160774]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:22 localhost python3.9[160866]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:22 localhost python3.9[160958]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:23 localhost python3.9[161050]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:23 localhost python3.9[161142]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:24 localhost python3.9[161234]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:25 localhost python3.9[161326]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:24:26 localhost python3.9[161418]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Nov 28 04:24:26 localhost python3.9[161510]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:24:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48537 DF PROTO=TCP SPT=44100 DPT=9105 SEQ=2256137115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAAA530000000001030307) Nov 28 04:24:27 localhost python3.9[161602]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:24:27 localhost systemd[1]: Reloading. Nov 28 04:24:27 localhost systemd-rc-local-generator[161630]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:24:27 localhost systemd-sysv-generator[161633]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:24:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:24:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48538 DF PROTO=TCP SPT=44100 DPT=9105 SEQ=2256137115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAAE7A0000000001030307) Nov 28 04:24:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27154 DF PROTO=TCP SPT=48020 DPT=9100 SEQ=1198390150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAAEFA0000000001030307) Nov 28 04:24:28 localhost python3.9[161730]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:29 localhost python3.9[161823]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:30 localhost python3.9[161916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:31 localhost python3.9[162009]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None 
removes=None stdin=None Nov 28 04:24:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1575 DF PROTO=TCP SPT=38762 DPT=9105 SEQ=3189010300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACABAFB0000000001030307) Nov 28 04:24:32 localhost python3.9[162131]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:32 localhost python3.9[162253]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:33 localhost python3.9[162349]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:24:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48540 DF PROTO=TCP SPT=44100 DPT=9105 SEQ=2256137115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAC63B0000000001030307) Nov 28 04:24:36 localhost python3.9[162457]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None Nov 28 04:24:36 localhost python3.9[162550]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False 
system=False local=False non_unique=False gid_min=None gid_max=None Nov 28 04:24:38 localhost python3.9[162648]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 28 04:24:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11996 DF PROTO=TCP SPT=34474 DPT=9102 SEQ=2891308655 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAD77A0000000001030307) Nov 28 04:24:39 localhost python3.9[162748]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:24:40 localhost python3.9[162802]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False 
update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:24:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48541 DF PROTO=TCP SPT=44100 DPT=9105 SEQ=2256137115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAE6FB0000000001030307) Nov 28 04:24:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35893 DF PROTO=TCP SPT=55554 DPT=9882 SEQ=3425997190 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAEAFB0000000001030307) Nov 28 04:24:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46591 DF PROTO=TCP SPT=57010 DPT=9100 SEQ=729025881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACAF47B0000000001030307) Nov 28 04:24:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:24:46 localhost podman[162872]: 2025-11-28 09:24:46.990205255 +0000 UTC m=+0.094799406 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:24:47 localhost podman[162872]: 2025-11-28 09:24:47.027147406 +0000 UTC m=+0.131741567 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:24:47 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:24:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46592 DF PROTO=TCP SPT=57010 DPT=9100 SEQ=729025881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB043A0000000001030307) Nov 28 04:24:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:24:50.803 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:24:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:24:50.803 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:24:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:24:50.804 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:24:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:24:50 localhost podman[162900]: 2025-11-28 09:24:50.967272188 +0000 UTC m=+0.077497645 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:24:51 localhost podman[162900]: 2025-11-28 09:24:51.001445125 +0000 UTC 
m=+0.111670582 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:24:51 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:24:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37116 DF PROTO=TCP SPT=51108 DPT=9105 SEQ=3890080462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB1F830000000001030307) Nov 28 04:24:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37117 DF PROTO=TCP SPT=51108 DPT=9105 SEQ=3890080462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB237B0000000001030307) Nov 28 04:24:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33846 DF PROTO=TCP SPT=52758 DPT=9882 SEQ=1136204254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB24AC0000000001030307) Nov 28 04:25:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37497 DF PROTO=TCP SPT=36594 DPT=9105 SEQ=2108030151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB2EFA0000000001030307) Nov 28 04:25:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37119 DF PROTO=TCP SPT=51108 DPT=9105 SEQ=3890080462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB3B3A0000000001030307) Nov 28 04:25:06 localhost kernel: SELinux: Converting 2746 SID table entries... Nov 28 04:25:06 localhost kernel: SELinux: Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped). 
Nov 28 04:25:06 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:06 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:06 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:06 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:06 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:06 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:06 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3984 DF PROTO=TCP SPT=38716 DPT=9102 SEQ=2634108566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB4CBA0000000001030307) Nov 28 04:25:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37120 DF PROTO=TCP SPT=51108 DPT=9105 SEQ=3890080462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB5AFB0000000001030307) Nov 28 04:25:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16058 DF PROTO=TCP SPT=59640 DPT=9101 SEQ=4264250608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB60800000000001030307) Nov 28 04:25:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63914 DF PROTO=TCP SPT=41870 DPT=9100 SEQ=952836503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB69BA0000000001030307) Nov 28 04:25:16 localhost kernel: SELinux: Converting 2749 SID table 
entries... Nov 28 04:25:16 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:16 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:16 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:16 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:16 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:16 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:16 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:17 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=20 res=1 Nov 28 04:25:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:25:17 localhost systemd[1]: tmp-crun.b4xROk.mount: Deactivated successfully. Nov 28 04:25:18 localhost podman[163942]: 2025-11-28 09:25:18.006764774 +0000 UTC m=+0.104459851 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 
'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:25:18 localhost podman[163942]: 2025-11-28 09:25:18.066872596 +0000 UTC m=+0.164567673 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Nov 28 04:25:18 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:25:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63915 DF PROTO=TCP SPT=41870 DPT=9100 SEQ=952836503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB797A0000000001030307) Nov 28 04:25:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:25:21 localhost systemd[1]: tmp-crun.1phFrk.mount: Deactivated successfully. Nov 28 04:25:21 localhost podman[163967]: 2025-11-28 09:25:21.98976888 +0000 UTC m=+0.097930161 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:25:22 localhost podman[163967]: 2025-11-28 09:25:22.023328468 +0000 UTC m=+0.131489749 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:25:22 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:25:25 localhost kernel: SELinux: Converting 2749 SID table entries... Nov 28 04:25:25 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:25 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:25 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:25 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:25 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:25 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:25 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31717 DF PROTO=TCP SPT=47824 DPT=9105 SEQ=610611585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB94B30000000001030307) Nov 28 04:25:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31718 DF PROTO=TCP SPT=47824 DPT=9105 SEQ=610611585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB98BA0000000001030307) Nov 28 04:25:28 localhost 
kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63916 DF PROTO=TCP SPT=41870 DPT=9100 SEQ=952836503 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACB98FA0000000001030307) Nov 28 04:25:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48543 DF PROTO=TCP SPT=44100 DPT=9105 SEQ=2256137115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBA4FB0000000001030307) Nov 28 04:25:33 localhost kernel: SELinux: Converting 2749 SID table entries... Nov 28 04:25:33 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:33 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:33 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:33 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:33 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:33 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:33 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:33 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=22 res=1 Nov 28 04:25:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31720 DF PROTO=TCP SPT=47824 DPT=9105 SEQ=610611585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBB07A0000000001030307) Nov 28 04:25:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36254 DF PROTO=TCP SPT=50342 DPT=9102 SEQ=2546681035 ACK=0 
WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBC1FB0000000001030307) Nov 28 04:25:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31721 DF PROTO=TCP SPT=47824 DPT=9105 SEQ=610611585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBD0FA0000000001030307) Nov 28 04:25:43 localhost kernel: SELinux: Converting 2749 SID table entries... Nov 28 04:25:43 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:43 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:43 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:43 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:43 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:43 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:43 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33013 DF PROTO=TCP SPT=42454 DPT=9101 SEQ=2394468827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBD5B00000000001030307) Nov 28 04:25:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20738 DF PROTO=TCP SPT=46568 DPT=9100 SEQ=3501637710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBDEFA0000000001030307) Nov 28 04:25:48 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=23 res=1 Nov 28 04:25:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:25:49 localhost podman[164101]: 2025-11-28 09:25:48.99814718 +0000 UTC m=+0.093233621 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:25:49 localhost podman[164101]: 2025-11-28 09:25:49.033775936 +0000 UTC m=+0.128862377 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:25:49 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:25:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20739 DF PROTO=TCP SPT=46568 DPT=9100 SEQ=3501637710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACBEEBA0000000001030307) Nov 28 04:25:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:25:50.804 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:25:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:25:50.805 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:25:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:25:50.805 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:25:51 localhost kernel: SELinux: Converting 2749 SID table entries... 
Nov 28 04:25:51 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:25:51 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:25:51 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:25:51 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:25:51 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:25:51 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:25:51 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:25:51 localhost systemd[1]: Reloading. Nov 28 04:25:51 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=24 res=1 Nov 28 04:25:52 localhost systemd-rc-local-generator[164160]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:25:52 localhost systemd-sysv-generator[164165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:25:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:25:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:25:52 localhost podman[164172]: 2025-11-28 09:25:52.309248024 +0000 UTC m=+0.074253366 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:25:52 localhost podman[164172]: 2025-11-28 09:25:52.322576154 +0000 UTC 
m=+0.087581536 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:25:52 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:25:52 localhost systemd[1]: Reloading. 
Nov 28 04:25:52 localhost systemd-rc-local-generator[164216]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:25:52 localhost systemd-sysv-generator[164219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:25:52 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:25:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34853 DF PROTO=TCP SPT=37268 DPT=9105 SEQ=2440985153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC09E20000000001030307) Nov 28 04:25:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34854 DF PROTO=TCP SPT=37268 DPT=9105 SEQ=2440985153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC0DFA0000000001030307) Nov 28 04:25:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20740 DF PROTO=TCP SPT=46568 DPT=9100 SEQ=3501637710 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC0EFA0000000001030307) Nov 28 04:26:01 localhost kernel: SELinux: Converting 2750 SID table entries... 
Nov 28 04:26:01 localhost kernel: SELinux: policy capability network_peer_controls=1 Nov 28 04:26:01 localhost kernel: SELinux: policy capability open_perms=1 Nov 28 04:26:01 localhost kernel: SELinux: policy capability extended_socket_class=1 Nov 28 04:26:01 localhost kernel: SELinux: policy capability always_check_network=0 Nov 28 04:26:01 localhost kernel: SELinux: policy capability cgroup_seclabel=1 Nov 28 04:26:01 localhost kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 28 04:26:01 localhost kernel: SELinux: policy capability genfs_seclabel_symlinks=1 Nov 28 04:26:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34216 DF PROTO=TCP SPT=37558 DPT=9882 SEQ=3594735828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC1AFA0000000001030307) Nov 28 04:26:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34856 DF PROTO=TCP SPT=37268 DPT=9105 SEQ=2440985153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC25BA0000000001030307) Nov 28 04:26:06 localhost dbus-broker-launch[750]: Noticed file-system modification, trigger reload. 
Nov 28 04:26:06 localhost dbus-broker-launch[754]: avc: op=load_policy lsm=selinux seqno=25 res=1 Nov 28 04:26:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45125 DF PROTO=TCP SPT=45008 DPT=9102 SEQ=1571074448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC373A0000000001030307) Nov 28 04:26:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34857 DF PROTO=TCP SPT=37268 DPT=9105 SEQ=2440985153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC46FA0000000001030307) Nov 28 04:26:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14673 DF PROTO=TCP SPT=38030 DPT=9101 SEQ=505000131 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC4AE10000000001030307) Nov 28 04:26:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23070 DF PROTO=TCP SPT=46586 DPT=9100 SEQ=4169613418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC543A0000000001030307) Nov 28 04:26:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:26:20 localhost podman[164962]: 2025-11-28 09:26:20.023384607 +0000 UTC m=+0.118716434 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 04:26:20 localhost podman[164962]: 2025-11-28 09:26:20.05693748 +0000 UTC m=+0.152269267 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Nov 28 04:26:20 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:26:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23071 DF PROTO=TCP SPT=46586 DPT=9100 SEQ=4169613418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC63FA0000000001030307) Nov 28 04:26:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:26:22 localhost podman[167173]: 2025-11-28 09:26:22.963183425 +0000 UTC m=+0.066557480 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent) Nov 28 04:26:22 localhost podman[167173]: 2025-11-28 09:26:22.996364625 +0000 UTC 
m=+0.099738700 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 28 04:26:23 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:26:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26048 DF PROTO=TCP SPT=43022 DPT=9105 SEQ=4127241913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC7F130000000001030307) Nov 28 04:26:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26049 DF PROTO=TCP SPT=43022 DPT=9105 SEQ=4127241913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC833A0000000001030307) Nov 28 04:26:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24605 DF PROTO=TCP SPT=44230 DPT=9882 SEQ=948264820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC843C0000000001030307) Nov 28 04:26:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31723 DF PROTO=TCP SPT=47824 DPT=9105 SEQ=610611585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC8EFA0000000001030307) Nov 28 04:26:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26051 DF PROTO=TCP SPT=43022 DPT=9105 SEQ=4127241913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACC9AFA0000000001030307) Nov 28 04:26:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=542 DF PROTO=TCP SPT=54452 DPT=9102 SEQ=1090106761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5ACCAC3A0000000001030307) Nov 28 04:26:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26052 DF PROTO=TCP SPT=43022 DPT=9105 SEQ=4127241913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCBAFA0000000001030307) Nov 28 04:26:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60019 DF PROTO=TCP SPT=47406 DPT=9101 SEQ=2970992170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCC0100000000001030307) Nov 28 04:26:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11028 DF PROTO=TCP SPT=48582 DPT=9100 SEQ=2362422037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCC93B0000000001030307) Nov 28 04:26:47 localhost systemd[1]: Stopping OpenSSH server daemon... Nov 28 04:26:47 localhost systemd[1]: sshd.service: Deactivated successfully. Nov 28 04:26:47 localhost systemd[1]: Stopped OpenSSH server daemon. Nov 28 04:26:47 localhost systemd[1]: Stopped target sshd-keygen.target. Nov 28 04:26:47 localhost systemd[1]: Stopping sshd-keygen.target... Nov 28 04:26:47 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 28 04:26:47 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). 
Nov 28 04:26:47 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target). Nov 28 04:26:47 localhost systemd[1]: Reached target sshd-keygen.target. Nov 28 04:26:48 localhost systemd[1]: Starting OpenSSH server daemon... Nov 28 04:26:48 localhost sshd[182098]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:26:48 localhost systemd[1]: Started OpenSSH server daemon. Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to 
parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:48 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:49 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 
04:26:49 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 04:26:50 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 04:26:50 localhost systemd[1]: Reloading. Nov 28 04:26:50 localhost systemd-sysv-generator[182368]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:26:50 localhost systemd-rc-local-generator[182365]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:26:50 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 04:26:50 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. 
Nov 28 04:26:50 localhost podman[182387]: 2025-11-28 09:26:50.425665365 +0000 UTC m=+0.063001006 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:26:50 localhost podman[182387]: 2025-11-28 09:26:50.526983619 +0000 UTC m=+0.164319260 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible) Nov 28 04:26:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11029 DF PROTO=TCP SPT=48582 DPT=9100 SEQ=2362422037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCD8FA0000000001030307) Nov 28 04:26:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:26:50.805 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:26:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:26:50.806 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:26:50 localhost ovn_metadata_agent[158525]: 2025-11-28 
09:26:50.806 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:26:51 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:26:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:26:53 localhost systemd[1]: tmp-crun.wGygyL.mount: Deactivated successfully. Nov 28 04:26:53 localhost podman[186237]: 2025-11-28 09:26:53.243276067 +0000 UTC m=+0.101282635 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS) Nov 28 04:26:53 localhost podman[186237]: 2025-11-28 09:26:53.281609397 +0000 UTC m=+0.139615965 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent) Nov 28 04:26:53 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:26:54 localhost python3.9[187012]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:26:55 localhost systemd[1]: Reloading. Nov 28 04:26:55 localhost systemd-rc-local-generator[188123]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:26:55 localhost systemd-sysv-generator[188126]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost python3.9[188411]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:26:56 localhost systemd[1]: Reloading. Nov 28 04:26:56 localhost systemd-rc-local-generator[188602]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:26:56 localhost systemd-sysv-generator[188608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:56 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47820 DF PROTO=TCP SPT=33570 DPT=9105 SEQ=1927594134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCF4430000000001030307) Nov 28 04:26:57 localhost python3.9[189035]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:26:57 localhost systemd[1]: Reloading. Nov 28 04:26:57 localhost systemd-rc-local-generator[189298]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:26:57 localhost systemd-sysv-generator[189301]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47821 DF PROTO=TCP SPT=33570 DPT=9105 SEQ=1927594134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCF83A0000000001030307) Nov 28 04:26:58 localhost python3.9[189713]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None 
Nov 28 04:26:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11030 DF PROTO=TCP SPT=48582 DPT=9100 SEQ=2362422037 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACCF8FB0000000001030307) Nov 28 04:26:58 localhost systemd[1]: Reloading. Nov 28 04:26:58 localhost systemd-rc-local-generator[189911]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:26:58 localhost systemd-sysv-generator[189914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:26:58 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:58 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:26:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:00 localhost python3.9[190430]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:01 localhost systemd[1]: Reloading. Nov 28 04:27:01 localhost systemd-rc-local-generator[191032]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:27:01 localhost systemd-sysv-generator[191039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34859 DF PROTO=TCP SPT=37268 DPT=9105 SEQ=2440985153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD04FA0000000001030307) Nov 28 04:27:02 localhost python3.9[191524]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:02 localhost systemd[1]: Reloading. Nov 28 04:27:02 localhost systemd-rc-local-generator[191646]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:27:02 localhost systemd-sysv-generator[191650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:27:02 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:02 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:03 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 04:27:03 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 04:27:03 localhost systemd[1]: man-db-cache-update.service: Consumed 15.148s CPU time. Nov 28 04:27:03 localhost systemd[1]: run-r3a3807d29206469a80a9ed357c87018e.service: Deactivated successfully. Nov 28 04:27:03 localhost systemd[1]: run-r4e2e26ab81f5435480c307cabef7f63d.service: Deactivated successfully. 
Nov 28 04:27:04 localhost python3.9[191882]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47823 DF PROTO=TCP SPT=33570 DPT=9105 SEQ=1927594134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD0FFA0000000001030307) Nov 28 04:27:05 localhost systemd[1]: Reloading. Nov 28 04:27:05 localhost systemd-sysv-generator[191915]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:27:05 localhost systemd-rc-local-generator[191909]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:05 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:06 localhost python3.9[192030]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:07 localhost python3.9[192143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:08 localhost systemd[1]: Reloading. Nov 28 04:27:08 localhost systemd-rc-local-generator[192170]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:27:08 localhost systemd-sysv-generator[192176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:08 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40460 DF PROTO=TCP SPT=41452 DPT=9102 SEQ=4005614768 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD217A0000000001030307) Nov 28 04:27:09 localhost python3.9[192291]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:27:09 localhost systemd[1]: Reloading. 
Nov 28 04:27:09 localhost systemd-sysv-generator[192323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:27:09 localhost systemd-rc-local-generator[192316]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:27:10 localhost python3.9[192441]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:12 localhost python3.9[192554]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47824 DF PROTO=TCP SPT=33570 DPT=9105 SEQ=1927594134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD30FA0000000001030307) Nov 28 04:27:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30968 DF PROTO=TCP SPT=54654 DPT=9882 SEQ=1642028125 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD34FA0000000001030307) Nov 28 04:27:14 localhost python3.9[192667]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:15 localhost python3.9[192780]: 
ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12830 DF PROTO=TCP SPT=41304 DPT=9100 SEQ=34626681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD3E7A0000000001030307) Nov 28 04:27:17 localhost python3.9[192893]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:19 localhost python3.9[193006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12831 DF PROTO=TCP SPT=41304 DPT=9100 SEQ=34626681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD4E3A0000000001030307) Nov 28 04:27:20 localhost python3.9[193119]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:21 localhost python3.9[193232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:27:21 localhost podman[193234]: 2025-11-28 09:27:21.906591429 +0000 UTC m=+0.091661227 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:27:21 localhost podman[193234]: 2025-11-28 09:27:21.946429235 +0000 UTC m=+0.131499023 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:27:21 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:27:22 localhost python3.9[193368]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:23 localhost python3.9[193481]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None Nov 28 04:27:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:27:23 localhost podman[193483]: 2025-11-28 09:27:23.438174168 +0000 UTC m=+0.079395742 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 04:27:23 localhost podman[193483]: 2025-11-28 09:27:23.471471948 +0000 UTC m=+0.112693532 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 28 04:27:23 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:27:24 localhost python3.9[193610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 04:27:24 localhost python3.9[193723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 04:27:25 localhost python3.9[193836]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 04:27:26 localhost python3.9[193949]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 28 04:27:27 localhost python3.9[194062]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5139 DF PROTO=TCP SPT=51608 DPT=9105 SEQ=3037923953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD69730000000001030307)
Nov 28 04:27:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5140 DF PROTO=TCP SPT=51608 DPT=9105 SEQ=3037923953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD6D7A0000000001030307)
Nov 28 04:27:28 localhost python3.9[194172]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38384 DF PROTO=TCP SPT=51510 DPT=9882 SEQ=4238462862 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD6E9C0000000001030307)
Nov 28 04:27:29 localhost python3.9[194282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:29 localhost python3.9[194392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:31 localhost python3.9[194502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26054 DF PROTO=TCP SPT=43022 DPT=9105 SEQ=4127241913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD78FB0000000001030307)
Nov 28 04:27:32 localhost python3.9[194612]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:27:32 localhost python3.9[194722]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:33 localhost python3.9[194812]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322052.2122705-1646-51385911595155/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:34 localhost python3.9[194922]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5142 DF PROTO=TCP SPT=51608 DPT=9105 SEQ=3037923953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD853A0000000001030307)
Nov 28 04:27:34 localhost python3.9[195012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322053.71599-1646-67231418976200/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:35 localhost python3.9[195122]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:35 localhost python3.9[195212]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322054.9062426-1646-85290725561634/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:36 localhost python3.9[195322]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:37 localhost python3.9[195412]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322056.137119-1646-46329029484448/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:37 localhost python3.9[195558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:38 localhost python3.9[195680]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322057.395518-1646-153893200498097/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:39 localhost python3.9[195807]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16943 DF PROTO=TCP SPT=43692 DPT=9102 SEQ=694514364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACD96BB0000000001030307)
Nov 28 04:27:39 localhost python3.9[195898]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322058.6462526-1646-173100488185855/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:40 localhost python3.9[196008]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:41 localhost python3.9[196096]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322059.7951365-1646-89436496203327/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:42 localhost python3.9[196206]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5143 DF PROTO=TCP SPT=51608 DPT=9105 SEQ=3037923953 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDA4FA0000000001030307)
Nov 28 04:27:42 localhost python3.9[196296]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764322061.68807-1646-133046340239568/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:43 localhost python3.9[196406]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16744 DF PROTO=TCP SPT=46192 DPT=9101 SEQ=1228987203 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDAA710000000001030307)
Nov 28 04:27:44 localhost python3.9[196516]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:45 localhost python3.9[196626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:45 localhost python3.9[196736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:46 localhost python3.9[196846]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46539 DF PROTO=TCP SPT=43862 DPT=9100 SEQ=382430769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDB3BB0000000001030307)
Nov 28 04:27:47 localhost python3.9[196956]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:47 localhost python3.9[197066]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:48 localhost python3.9[197176]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:49 localhost python3.9[197286]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:49 localhost python3.9[197396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:50 localhost python3.9[197506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46540 DF PROTO=TCP SPT=43862 DPT=9100 SEQ=382430769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDC37A0000000001030307)
Nov 28 04:27:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:27:50.807 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 04:27:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:27:50.808 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 04:27:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:27:50.808 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 04:27:51 localhost python3.9[197616]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:51 localhost python3.9[197726]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:27:52 localhost systemd[1]: tmp-crun.3P4TCw.mount: Deactivated successfully.
Nov 28 04:27:52 localhost podman[197837]: 2025-11-28 09:27:52.388469815 +0000 UTC m=+0.097247988 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 28 04:27:52 localhost podman[197837]: 2025-11-28 09:27:52.421908799 +0000 UTC m=+0.130686972 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Nov 28 04:27:52 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:27:52 localhost python3.9[197836]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:53 localhost python3.9[197971]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:27:53 localhost podman[198082]: 2025-11-28 09:27:53.792251853 +0000 UTC m=+0.082212354 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:27:53 localhost podman[198082]: 2025-11-28 09:27:53.822354834 +0000 UTC m=+0.112315285 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 04:27:53 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:27:53 localhost python3.9[198081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:55 localhost python3.9[198187]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322073.3527386-2308-269256967262127/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:55 localhost python3.9[198297]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:57 localhost python3.9[198385]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322075.3172312-2308-29852060132597/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14905 DF PROTO=TCP SPT=33392 DPT=9105 SEQ=4242062705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDDEA20000000001030307)
Nov 28 04:27:57 localhost python3.9[198495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:58 localhost python3.9[198583]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322077.4218621-2308-228135076465444/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:27:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14906 DF PROTO=TCP SPT=33392 DPT=9105 SEQ=4242062705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDE2BA0000000001030307)
Nov 28 04:27:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46541 DF PROTO=TCP SPT=43862 DPT=9100 SEQ=382430769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDE2FA0000000001030307)
Nov 28 04:27:59 localhost python3.9[198693]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:27:59 localhost python3.9[198781]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322078.5871315-2308-241193875116595/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:00 localhost python3.9[198891]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:00 localhost python3.9[198979]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322079.7439036-2308-180250792400484/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:01 localhost python3.9[199089]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47826 DF PROTO=TCP SPT=33570 DPT=9105 SEQ=1927594134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDEEFA0000000001030307)
Nov 28
04:28:01 localhost python3.9[199177]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322080.9841394-2308-35524085384709/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:02 localhost python3.9[199287]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:03 localhost python3.9[199375]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322082.1087193-2308-2644194947495/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:03 localhost python3.9[199485]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:04 localhost python3.9[199573]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322083.4275162-2308-11266728512259/.source.conf 
follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14908 DF PROTO=TCP SPT=33392 DPT=9105 SEQ=4242062705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACDFA7B0000000001030307) Nov 28 04:28:05 localhost python3.9[199683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:05 localhost python3.9[199771]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322084.6467984-2308-115488532257364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:06 localhost python3.9[199881]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:06 localhost python3.9[199969]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764322085.8230546-2308-108243518876229/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:07 localhost python3.9[200079]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:08 localhost python3.9[200167]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322087.0163543-2308-188583093520749/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34620 DF PROTO=TCP SPT=58496 DPT=9102 SEQ=3312259674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE0BFB0000000001030307) Nov 28 04:28:09 localhost python3.9[200277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:10 localhost python3.9[200365]: ansible-ansible.legacy.copy Invoked with 
dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322089.3391469-2308-221361733904935/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:10 localhost python3.9[200475]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:11 localhost python3.9[200563]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322090.441418-2308-53745199370202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:12 localhost python3.9[200673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14909 DF PROTO=TCP SPT=33392 DPT=9105 SEQ=4242062705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE1AFB0000000001030307) Nov 28 04:28:13 
localhost python3.9[200761]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322091.9935346-2308-3041262932487/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32968 DF PROTO=TCP SPT=56624 DPT=9882 SEQ=2895839007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE1F220000000001030307) Nov 28 04:28:14 localhost python3.9[200869]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:28:15 localhost python3.9[200982]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False Nov 28 04:28:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14131 DF PROTO=TCP SPT=53422 DPT=9100 SEQ=767705073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE28FB0000000001030307) Nov 28 04:28:16 localhost python3.9[201092]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:28:16 
localhost systemd[1]: Reloading. Nov 28 04:28:16 localhost systemd-rc-local-generator[201113]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:28:16 localhost systemd-sysv-generator[201117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:16 localhost systemd[1]: Starting libvirt logging daemon socket... Nov 28 04:28:16 localhost systemd[1]: Listening on libvirt logging daemon socket. 
Nov 28 04:28:16 localhost systemd[1]: Starting libvirt logging daemon admin socket... Nov 28 04:28:16 localhost systemd[1]: Listening on libvirt logging daemon admin socket. Nov 28 04:28:16 localhost systemd[1]: Starting libvirt logging daemon... Nov 28 04:28:17 localhost systemd[1]: Started libvirt logging daemon. Nov 28 04:28:17 localhost python3.9[201243]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:28:17 localhost systemd[1]: Reloading. Nov 28 04:28:18 localhost systemd-rc-local-generator[201267]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:28:18 localhost systemd-sysv-generator[201272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:18 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Nov 28 04:28:18 localhost systemd[1]: Starting libvirt nodedev daemon socket... Nov 28 04:28:18 localhost systemd[1]: Listening on libvirt nodedev daemon socket. Nov 28 04:28:18 localhost systemd[1]: Starting libvirt nodedev daemon admin socket... Nov 28 04:28:18 localhost systemd[1]: Starting libvirt nodedev daemon read-only socket... Nov 28 04:28:18 localhost systemd[1]: Listening on libvirt nodedev daemon admin socket. Nov 28 04:28:18 localhost systemd[1]: Listening on libvirt nodedev daemon read-only socket. Nov 28 04:28:18 localhost systemd[1]: Started libvirt nodedev daemon. Nov 28 04:28:18 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Nov 28 04:28:18 localhost systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged. Nov 28 04:28:18 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service. Nov 28 04:28:19 localhost python3.9[201426]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:28:19 localhost systemd[1]: Reloading. 
Nov 28 04:28:19 localhost systemd-rc-local-generator[201453]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:28:19 localhost systemd-sysv-generator[201458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:19 localhost systemd[1]: Starting libvirt proxy daemon socket... Nov 28 04:28:19 localhost systemd[1]: Listening on libvirt proxy daemon socket. Nov 28 04:28:19 localhost systemd[1]: Starting libvirt proxy daemon admin socket... 
Nov 28 04:28:19 localhost systemd[1]: Starting libvirt proxy daemon read-only socket... Nov 28 04:28:19 localhost systemd[1]: Listening on libvirt proxy daemon admin socket. Nov 28 04:28:19 localhost systemd[1]: Listening on libvirt proxy daemon read-only socket. Nov 28 04:28:19 localhost systemd[1]: Started libvirt proxy daemon. Nov 28 04:28:19 localhost setroubleshoot[201281]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 29696a87-b25b-4ed1-b530-193abd3ed6b3 Nov 28 04:28:19 localhost setroubleshoot[201281]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Nov 28 04:28:19 localhost setroubleshoot[201281]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. 
For complete SELinux messages run: sealert -l 29696a87-b25b-4ed1-b530-193abd3ed6b3 Nov 28 04:28:19 localhost setroubleshoot[201281]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012***** Plugin dac_override (91.4 confidence) suggests **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012***** Plugin catchall (9.59 confidence) suggests **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012 Nov 28 04:28:20 localhost python3.9[201600]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:28:20 localhost systemd[1]: Reloading. Nov 28 04:28:20 localhost systemd-rc-local-generator[201623]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:28:20 localhost systemd-sysv-generator[201628]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:20 localhost systemd[1]: Listening on libvirt locking daemon socket. Nov 28 04:28:20 localhost systemd[1]: Starting libvirt QEMU daemon socket... Nov 28 04:28:20 localhost systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 28 04:28:20 localhost systemd[1]: Starting Virtual Machine and Container Registration Service... Nov 28 04:28:20 localhost systemd[1]: Listening on libvirt QEMU daemon socket. Nov 28 04:28:20 localhost systemd[1]: Starting libvirt QEMU daemon admin socket... 
Nov 28 04:28:20 localhost systemd[1]: Starting libvirt QEMU daemon read-only socket... Nov 28 04:28:20 localhost systemd[1]: Listening on libvirt QEMU daemon admin socket. Nov 28 04:28:20 localhost systemd[1]: Listening on libvirt QEMU daemon read-only socket. Nov 28 04:28:20 localhost systemd[1]: Started Virtual Machine and Container Registration Service. Nov 28 04:28:20 localhost systemd[1]: Started libvirt QEMU daemon. Nov 28 04:28:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14132 DF PROTO=TCP SPT=53422 DPT=9100 SEQ=767705073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE38BA0000000001030307) Nov 28 04:28:21 localhost python3.9[201774]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:28:21 localhost systemd[1]: Reloading. Nov 28 04:28:21 localhost systemd-rc-local-generator[201800]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:28:21 localhost systemd-sysv-generator[201805]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:28:21 localhost systemd[1]: Starting libvirt secret daemon socket... Nov 28 04:28:21 localhost systemd[1]: Listening on libvirt secret daemon socket. Nov 28 04:28:21 localhost systemd[1]: Starting libvirt secret daemon admin socket... Nov 28 04:28:21 localhost systemd[1]: Starting libvirt secret daemon read-only socket... Nov 28 04:28:21 localhost systemd[1]: Listening on libvirt secret daemon admin socket. Nov 28 04:28:21 localhost systemd[1]: Listening on libvirt secret daemon read-only socket. Nov 28 04:28:21 localhost systemd[1]: Started libvirt secret daemon. 
Nov 28 04:28:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:28:22 localhost systemd[1]: tmp-crun.7LATM0.mount: Deactivated successfully. Nov 28 04:28:22 localhost podman[201853]: 2025-11-28 09:28:22.983473079 +0000 UTC m=+0.090949525 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible) Nov 28 04:28:23 localhost podman[201853]: 2025-11-28 09:28:23.051471592 +0000 UTC m=+0.158948068 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:28:23 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:28:23 localhost python3.9[201970]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:28:23 localhost podman[202042]: 2025-11-28 09:28:23.98674786 +0000 UTC m=+0.080306666 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 28 04:28:23 localhost podman[202042]: 2025-11-28 09:28:23.996385157 +0000 UTC 
m=+0.089943953 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:28:24 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:28:24 localhost python3.9[202099]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:28:25 localhost python3.9[202209]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:28:26 localhost python3.9[202321]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:28:27 localhost python3.9[202429]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57641 DF PROTO=TCP SPT=41550 DPT=9105 SEQ=2528829746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE53D20000000001030307) Nov 28 04:28:28 localhost python3.9[202515]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322107.061128-3172-105405855256949/.source.xml 
follow=False _original_basename=secret.xml.j2 checksum=817431989b0a3ade349fa0105099056ad78b021d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57642 DF PROTO=TCP SPT=41550 DPT=9105 SEQ=2528829746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE57FA0000000001030307) Nov 28 04:28:28 localhost python3.9[202625]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 2c5417c9-00eb-57d5-a565-ddecbc7995c1#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:28:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14133 DF PROTO=TCP SPT=53422 DPT=9100 SEQ=767705073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE58FA0000000001030307) Nov 28 04:28:29 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully. Nov 28 04:28:29 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
Nov 28 04:28:30 localhost python3.9[202745]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44635 DF PROTO=TCP SPT=36840 DPT=9882 SEQ=2416056394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE64FA0000000001030307) Nov 28 04:28:32 localhost python3.9[203082]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:33 localhost python3.9[203192]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:28:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 4784 writes, 21K keys, 4784 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 4784 writes, 637 syncs, 7.51 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 
percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.2 0.01 0.00 1 0.005 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB 
read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total 
Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55ab8a42b350#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Nov 28 04:28:34 localhost python3.9[203280]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322113.0430126-3338-216008786423962/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57644 DF PROTO=TCP SPT=41550 DPT=9105 SEQ=2528829746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE6FBA0000000001030307) Nov 28 04:28:35 localhost python3.9[203390]: ansible-ansible.builtin.file 
Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:36 localhost python3.9[203500]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:36 localhost python3.9[203557]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:28:37 localhost python3.9[203667]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:38 localhost python3.9[203724]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n6s81c7c recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None 
selevel=None setype=None attributes=None Nov 28 04:28:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:28:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.2 total, 600.0 interval#012Cumulative writes: 5781 writes, 25K keys, 5781 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5781 writes, 729 syncs, 7.93 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 2/0 2.61 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Sum 2/0 2.61 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.0 0.03 0.00 1 0.025 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.03 0.00 1 
0.025 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-0] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) 
W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562bf8a0b610#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum 0/0 0.00 KB 
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0#012#012** Compaction Stats [m-1] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.2 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl Nov 28 04:28:38 localhost python3.9[203834]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:28:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25187 DF PROTO=TCP SPT=48762 DPT=9102 SEQ=3832156165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE80FA0000000001030307) Nov 28 04:28:39 
localhost python3.9[203927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:40 localhost python3.9[204068]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:28:41 localhost python3[204197]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 28 04:28:41 localhost python3.9[204307]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:42 localhost python3.9[204364]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:43 localhost python3.9[204474]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57645 DF PROTO=TCP SPT=41550 DPT=9105 SEQ=2528829746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE90FB0000000001030307)
Nov 28 04:28:43 localhost python3.9[204531]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37771 DF PROTO=TCP SPT=43138 DPT=9101 SEQ=1486897896 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE94D00000000001030307)
Nov 28 04:28:44 localhost python3.9[204641]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:44 localhost python3.9[204698]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:45 localhost python3.9[204808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:46 localhost python3.9[204865]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51985 DF PROTO=TCP SPT=49654 DPT=9100 SEQ=3981679216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACE9DFA0000000001030307)
Nov 28 04:28:46 localhost python3.9[204975]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:47 localhost python3.9[205065]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764322126.419052-3713-219472289816993/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:48 localhost python3.9[205175]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:49 localhost python3.9[205285]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:28:50 localhost python3.9[205398]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51986 DF PROTO=TCP SPT=49654 DPT=9100 SEQ=3981679216 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACEADBA0000000001030307)
Nov 28 04:28:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:28:50.808 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 04:28:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:28:50.809 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 04:28:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:28:50.809 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 04:28:51 localhost python3.9[205508]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:28:52 localhost python3.9[205620]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:28:53 localhost python3.9[205732]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:28:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:28:53 localhost systemd[1]: tmp-crun.ydcLLB.mount: Deactivated successfully.
Nov 28 04:28:53 localhost podman[205846]: 2025-11-28 09:28:53.773810236 +0000 UTC m=+0.090439980 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 28 04:28:53 localhost podman[205846]: 2025-11-28 09:28:53.84675863 +0000 UTC m=+0.163388364 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 04:28:53 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:28:53 localhost python3.9[205845]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:28:54 localhost systemd[1]: tmp-crun.US96oS.mount: Deactivated successfully.
Nov 28 04:28:54 localhost podman[205980]: 2025-11-28 09:28:54.607911721 +0000 UTC m=+0.133236552 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 28 04:28:54 localhost podman[205980]: 2025-11-28 09:28:54.641443907 +0000 UTC m=+0.166768738 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:28:54 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:28:54 localhost python3.9[205979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:55 localhost python3.9[206083]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322134.1557379-3929-40441686076193/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:56 localhost python3.9[206193]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:56 localhost python3.9[206281]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322135.9008873-3975-200302596605283/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41330 DF PROTO=TCP SPT=44712 DPT=9105 SEQ=381928380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACEC9020000000001030307)
Nov 28 04:28:57 localhost python3.9[206391]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:28:58 localhost python3.9[206479]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322137.1153536-4020-136733623472910/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:28:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41331 DF PROTO=TCP SPT=44712 DPT=9105 SEQ=381928380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACECCFA0000000001030307)
Nov 28 04:28:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41579 DF PROTO=TCP SPT=45910 DPT=9882 SEQ=1699515411 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACECE2C0000000001030307)
Nov 28 04:28:58 localhost python3.9[206589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:28:58 localhost systemd[1]: Reloading.
Nov 28 04:28:59 localhost systemd-sysv-generator[206619]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:28:59 localhost systemd-rc-local-generator[206614]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:28:59 localhost systemd[1]: Reached target edpm_libvirt.target.
Nov 28 04:29:00 localhost python3.9[206739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 28 04:29:00 localhost systemd[1]: Reloading.
Nov 28 04:29:00 localhost systemd-rc-local-generator[206762]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:29:00 localhost systemd-sysv-generator[206765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:00 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: Reloading.
Nov 28 04:29:01 localhost systemd-rc-local-generator[206799]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:29:01 localhost systemd-sysv-generator[206802]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:29:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14911 DF PROTO=TCP SPT=33392 DPT=9105 SEQ=4242062705 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACED8FB0000000001030307)
Nov 28 04:29:02 localhost systemd[1]: session-52.scope: Deactivated successfully.
Nov 28 04:29:02 localhost systemd[1]: session-52.scope: Consumed 3min 38.127s CPU time.
Nov 28 04:29:02 localhost systemd-logind[763]: Session 52 logged out. Waiting for processes to exit.
Nov 28 04:29:02 localhost systemd-logind[763]: Removed session 52.
Nov 28 04:29:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41333 DF PROTO=TCP SPT=44712 DPT=9105 SEQ=381928380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACEE4BA0000000001030307)
Nov 28 04:29:08 localhost sshd[206830]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:29:08 localhost systemd-logind[763]: New session 53 of user zuul.
Nov 28 04:29:08 localhost systemd[1]: Started Session 53 of User zuul.
Nov 28 04:29:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28587 DF PROTO=TCP SPT=38402 DPT=9102 SEQ=3051273201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACEF63A0000000001030307)
Nov 28 04:29:09 localhost python3.9[206941]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:29:11 localhost python3.9[207053]: ansible-ansible.builtin.service_facts Invoked
Nov 28 04:29:11 localhost network[207070]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 28 04:29:11 localhost network[207071]: 'network-scripts' will be removed from distribution in near future.
Nov 28 04:29:11 localhost network[207072]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 28 04:29:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:29:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41334 DF PROTO=TCP SPT=44712 DPT=9105 SEQ=381928380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF04FB0000000001030307)
Nov 28 04:29:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28446 DF PROTO=TCP SPT=38730 DPT=9101 SEQ=4030811793 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF0A010000000001030307)
Nov 28 04:29:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48449 DF PROTO=TCP SPT=34136 DPT=9100 SEQ=1636643879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF133B0000000001030307)
Nov 28 04:29:17 localhost python3.9[207304]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 28 04:29:18 localhost python3.9[207367]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 28 04:29:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48450 DF PROTO=TCP SPT=34136 DPT=9100 SEQ=1636643879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF22FA0000000001030307)
Nov 28 04:29:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:29:23 localhost systemd[1]: tmp-crun.j1dYou.mount: Deactivated successfully.
Nov 28 04:29:24 localhost podman[207370]: 2025-11-28 09:29:23.999646882 +0000 UTC m=+0.099275481 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 28 04:29:24 localhost podman[207370]: 2025-11-28 09:29:24.079817917 +0000 UTC m=+0.179446487 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller)
Nov 28 04:29:24 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:29:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:29:24 localhost podman[207394]: 2025-11-28 09:29:24.972743574 +0000 UTC m=+0.078761273 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 04:29:25 localhost podman[207394]: 2025-11-28 09:29:25.006457716 +0000 UTC m=+0.112475485 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:29:25 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:29:26 localhost python3.9[207523]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:29:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8637 DF PROTO=TCP SPT=50356 DPT=9105 SEQ=3978927549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF3E330000000001030307)
Nov 28 04:29:27 localhost python3.9[207635]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:28 localhost python3.9[207745]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"#012 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:29:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8638 DF PROTO=TCP SPT=50356 DPT=9105 SEQ=3978927549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF423B0000000001030307)
Nov 28 04:29:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48451 DF
PROTO=TCP SPT=34136 DPT=9100 SEQ=1636643879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF42FA0000000001030307) Nov 28 04:29:29 localhost python3.9[207856]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:29:29 localhost python3.9[207967]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:29:30 localhost python3.9[208078]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:29:31 localhost python3.9[208190]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:29:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57647 DF PROTO=TCP SPT=41550 DPT=9105 SEQ=2528829746 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF4EFA0000000001030307) Nov 28 04:29:33 localhost python3.9[208300]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket 
state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:29:34 localhost systemd[1]: Listening on Open-iSCSI iscsid Socket. Nov 28 04:29:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8640 DF PROTO=TCP SPT=50356 DPT=9105 SEQ=3978927549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF59FB0000000001030307) Nov 28 04:29:35 localhost python3.9[208414]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:29:35 localhost systemd[1]: Reloading. Nov 28 04:29:35 localhost systemd-rc-local-generator[208439]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:29:35 localhost systemd-sysv-generator[208444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
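The `ansible-ansible.builtin.lineinfile` task above pins the CHAP algorithm list in `/etc/iscsi/iscsid.conf` (replace the line if `^node.session.auth.chap_algs` already matches, otherwise insert it after the commented template). A minimal sketch of that match-or-insert behavior on a throwaway copy — the starting file content here is an assumption for illustration:

```shell
# Throwaway stand-in for /etc/iscsi/iscsid.conf (content is assumed).
tmpd=$(mktemp -d)
cat > "$tmpd/iscsid.conf" <<'EOF'
#node.session.auth.chap.algs = MD5
node.session.timeo.replacement_timeout = 120
EOF
line='node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5'
if grep -q '^node.session.auth.chap_algs' "$tmpd/iscsid.conf"; then
  # regexp matched: lineinfile replaces the existing line in place
  sed -i "s|^node.session.auth.chap_algs.*|$line|" "$tmpd/iscsid.conf"
else
  # no match: insert after the commented template line (insertafter=)
  sed -i "/^#node.session.auth.chap.algs/a $line" "$tmpd/iscsid.conf"
fi
grep chap_algs "$tmpd/iscsid.conf"
```

The algorithm list is ordered strongest-first, so an initiator negotiating with a target prefers SHA3-256 and only falls back to MD5 for legacy targets.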
Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:29:35 localhost systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi). Nov 28 04:29:35 localhost systemd[1]: Starting Open-iSCSI... Nov 28 04:29:35 localhost iscsid[208454]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 28 04:29:35 localhost iscsid[208454]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 28 04:29:35 localhost iscsid[208454]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 28 04:29:35 localhost iscsid[208454]: If using hardware iscsi like qla4xxx this message can be ignored. 
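iscsid warns above because `/etc/iscsi/initiatorname.iscsi` is missing, and spells out the expected format (`InitiatorName=iqn.yyyy-mm.<reversed-domain>[:identifier]`). A sketch of creating and sanity-checking such a file — the IQN value is a made-up example, and a temp dir stands in for `/etc/iscsi` (which needs root):

```shell
# Temp dir stands in for /etc/iscsi; the IQN below is a fabricated example.
tmpd=$(mktemp -d)
printf 'InitiatorName=iqn.2025-11.com.example:node1\n' > "$tmpd/initiatorname.iscsi"
# Validate the documented shape: iqn.yyyy-mm.reversed-domain[:identifier]
grep -Eq '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.[A-Za-z0-9.-]+(:.+)?$' \
  "$tmpd/initiatorname.iscsi" && echo format-ok
```

In a real deployment the name is usually generated with `iscsi-iname` rather than typed by hand, so each host gets a unique IQN.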
Nov 28 04:29:35 localhost iscsid[208454]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 28 04:29:35 localhost iscsid[208454]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 28 04:29:35 localhost iscsid[208454]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf Nov 28 04:29:35 localhost systemd[1]: Started Open-iSCSI. Nov 28 04:29:35 localhost systemd[1]: Starting Logout off all iSCSI sessions on shutdown... Nov 28 04:29:35 localhost systemd[1]: Finished Logout off all iSCSI sessions on shutdown. Nov 28 04:29:37 localhost systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs... Nov 28 04:29:37 localhost systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs. Nov 28 04:29:37 localhost systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service. Nov 28 04:29:37 localhost python3.9[208566]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:29:37 localhost network[208596]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:29:37 localhost network[208597]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:29:37 localhost network[208598]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. 
confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 1a700e1f-be86-4e1b-b86c-07704bf948ee Nov 28 04:29:38 localhost setroubleshoot[208489]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.#012#012***** Plugin catchall (100. confidence) suggests **************************#012#012If you believe that iscsid should be allowed search access on the iscsi directory by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid#012# semodule -X 300 -i my-iscsid.pp#012 Nov 28 04:29:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8698 DF PROTO=TCP SPT=34248 DPT=9102 SEQ=2145120188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF6B7A0000000001030307) Nov 28 04:29:40 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
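The kernel `DROPPING:` entries interleaved throughout come from a logging firewall rule. For triage it helps to pull the connection tuple out of the key=value noise; a small sketch using a sample line copied from the log above:

```shell
# Sample DROPPING line taken verbatim (minus timestamp) from the log above.
line='DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8698 DF PROTO=TCP SPT=34248 DPT=9102 SEQ=2145120188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0'
# One token per line, then keep only the 5-tuple fields (anchored so
# MACSRC/MACDST/MACPROTO are not picked up by mistake).
tuple=$(echo "$line" | tr ' ' '\n' | awk -F= '/^(SRC|DST|PROTO|SPT|DPT)=/{printf "%s ", $2}')
echo "$tuple"
```

Here the drop is a SYN from 192.168.122.10 to port 9102 on this host — consistent with the other entries, which all target monitoring-style ports (9100, 9102, 9105, 9882) blocked on `br-ex`.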
Nov 28 04:29:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8641 DF PROTO=TCP SPT=50356 DPT=9105 SEQ=3978927549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF7AFA0000000001030307) Nov 28 04:29:43 localhost python3.9[208917]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 04:29:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4748 DF PROTO=TCP SPT=55146 DPT=9882 SEQ=358108239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF7EFB0000000001030307) Nov 28 04:29:44 localhost python3.9[209027]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Nov 28 04:29:45 localhost python3.9[209141]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:29:45 localhost python3.9[209229]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322184.8981476-458-171705779550144/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:29:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27688 DF PROTO=TCP SPT=35896 DPT=9100 SEQ=904848703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF887A0000000001030307) Nov 28 04:29:46 localhost python3.9[209339]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:29:47 localhost python3.9[209449]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:29:47 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 28 04:29:47 localhost systemd[1]: Stopped Load Kernel Modules. Nov 28 04:29:47 localhost systemd[1]: Stopping Load Kernel Modules... Nov 28 04:29:47 localhost systemd[1]: Starting Load Kernel Modules... Nov 28 04:29:47 localhost systemd-modules-load[209453]: Module 'msr' is built in Nov 28 04:29:47 localhost systemd[1]: Finished Load Kernel Modules. Nov 28 04:29:48 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully. Nov 28 04:29:48 localhost systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Consumed 1.007s CPU time. Nov 28 04:29:48 localhost systemd[1]: setroubleshootd.service: Deactivated successfully. 
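The modprobe/copy/lineinfile tasks above persist the `dm-multipath` module across reboots by dropping a one-line conf into `/etc/modules-load.d/` and restarting `systemd-modules-load.service`. The resulting drop-in is just the module name; sketched here against a temp dir instead of `/etc`:

```shell
# Temp dir stands in for /etc; the drop-in body is exactly one module name.
tmpd=$(mktemp -d)
mkdir -p "$tmpd/modules-load.d"
echo dm-multipath > "$tmpd/modules-load.d/dm-multipath.conf"
# systemd-modules-load.service reads every *.conf here at boot; restarting
# it (as the playbook does above) loads the module immediately as well.
cat "$tmpd/modules-load.d/dm-multipath.conf"
```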
Nov 28 04:29:49 localhost python3.9[209564]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:29:49 localhost python3.9[209674]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:29:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27689 DF PROTO=TCP SPT=35896 DPT=9100 SEQ=904848703 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACF983A0000000001030307) Nov 28 04:29:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:29:50.809 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:29:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:29:50.810 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:29:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:29:50.810 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:29:51 localhost python3.9[209784]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:29:52 localhost python3.9[209894]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:29:52 localhost python3.9[209982]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322191.9403815-632-199712378350050/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:29:53 localhost python3.9[210092]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:29:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:29:54 localhost systemd[1]: tmp-crun.NMLxPx.mount: Deactivated successfully. 
Nov 28 04:29:54 localhost podman[210204]: 2025-11-28 09:29:54.355865575 +0000 UTC m=+0.097321478 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:29:54 localhost podman[210204]: 2025-11-28 09:29:54.430266233 +0000 UTC m=+0.171722156 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 28 04:29:54 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:29:54 localhost python3.9[210203]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:29:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:29:55 localhost podman[210338]: 2025-11-28 09:29:55.253845091 +0000 UTC m=+0.086956807 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 28 04:29:55 localhost podman[210338]: 2025-11-28 09:29:55.284568954 +0000 UTC 
m=+0.117680620 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:29:55 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:29:55 localhost systemd[1]: tmp-crun.4LXkY3.mount: Deactivated successfully.
Nov 28 04:29:55 localhost python3.9[210337]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:56 localhost python3.9[210463]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:57 localhost python3.9[210573]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3190 DF PROTO=TCP SPT=48832 DPT=9105 SEQ=3226923448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFB3630000000001030307)
Nov 28 04:29:57 localhost python3.9[210683]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:58 localhost python3.9[210793]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3191 DF PROTO=TCP SPT=48832 DPT=9105 SEQ=3226923448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFB77B0000000001030307)
Nov 28 04:29:58 localhost python3.9[210903]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:29:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53077 DF PROTO=TCP SPT=52310 DPT=9882 SEQ=180747497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFB88C0000000001030307)
Nov 28 04:29:59 localhost python3.9[211013]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:30:00 localhost python3.9[211125]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:01 localhost python3.9[211235]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41336 DF PROTO=TCP SPT=44712 DPT=9105 SEQ=381928380 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFC2FB0000000001030307)
Nov 28 04:30:01 localhost python3.9[211345]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:03 localhost python3.9[211402]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:03 localhost python3.9[211512]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:04 localhost python3.9[211569]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3193 DF PROTO=TCP SPT=48832 DPT=9105 SEQ=3226923448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFCF3A0000000001030307)
Nov 28 04:30:06 localhost python3.9[211679]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:06 localhost python3.9[211789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:07 localhost python3.9[211846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:08 localhost python3.9[211956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:08 localhost python3.9[212013]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42834 DF PROTO=TCP SPT=51478 DPT=9102 SEQ=1356090074 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFE0BA0000000001030307)
Nov 28 04:30:09 localhost python3.9[212123]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:30:09 localhost systemd[1]: Reloading.
Nov 28 04:30:09 localhost systemd-rc-local-generator[212146]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:30:09 localhost systemd-sysv-generator[212149]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:10 localhost python3.9[212270]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:10 localhost python3.9[212327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:11 localhost python3.9[212437]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:12 localhost python3.9[212494]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3194 DF PROTO=TCP SPT=48832 DPT=9105 SEQ=3226923448 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFEEFA0000000001030307)
Nov 28 04:30:12 localhost python3.9[212604]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:30:12 localhost systemd[1]: Reloading.
Nov 28 04:30:13 localhost systemd-rc-local-generator[212630]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:30:13 localhost systemd-sysv-generator[212634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:13 localhost systemd[1]: Starting Create netns directory...
Nov 28 04:30:13 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 28 04:30:13 localhost systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 28 04:30:13 localhost systemd[1]: Finished Create netns directory.
Nov 28 04:30:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40665 DF PROTO=TCP SPT=51526 DPT=9101 SEQ=3212966188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFF4610000000001030307)
Nov 28 04:30:14 localhost python3.9[212756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:15 localhost python3.9[212866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:15 localhost python3.9[212954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322214.5598152-1253-161988379727081/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30374 DF PROTO=TCP SPT=49952 DPT=9100 SEQ=2669280850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ACFFDBA0000000001030307)
Nov 28 04:30:16 localhost python3.9[213064]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:30:17 localhost python3.9[213174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:30:18 localhost systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 28 04:30:19 localhost python3.9[213263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322217.2370613-1327-255706693340777/.source.json _original_basename=.eduy01wy follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:19 localhost systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 28 04:30:19 localhost python3.9[213374]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30375 DF PROTO=TCP SPT=49952 DPT=9100 SEQ=2669280850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD00D7A0000000001030307)
Nov 28 04:30:22 localhost python3.9[213682]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 28 04:30:23 localhost python3.9[213792]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 04:30:24 localhost python3.9[213902]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 28 04:30:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:30:24 localhost podman[213947]: 2025-11-28 09:30:24.977112667 +0000 UTC m=+0.080715785 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 04:30:25 localhost podman[213947]: 2025-11-28 09:30:25.076491899 +0000 UTC m=+0.180094977 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Nov 28 04:30:25 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:30:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:30:25 localhost podman[213972]: 2025-11-28 09:30:25.964452174 +0000 UTC m=+0.076322219 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 04:30:25 localhost podman[213972]: 2025-11-28 09:30:25.999422367 +0000 UTC m=+0.111292432 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 04:30:26 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:30:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3062 DF PROTO=TCP SPT=44944 DPT=9105 SEQ=1053215145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD028940000000001030307)
Nov 28 04:30:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3063 DF PROTO=TCP SPT=44944 DPT=9105 SEQ=1053215145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD02CBA0000000001030307)
Nov 28 04:30:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30376 DF PROTO=TCP SPT=49952 DPT=9100 SEQ=2669280850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD02CFA0000000001030307)
Nov 28 04:30:28 localhost python3[214082]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 04:30:31 localhost podman[214095]: 2025-11-28 09:30:28.933215451 +0000 UTC m=+0.048360590 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 04:30:31 localhost podman[214145]:
Nov 28 04:30:31 localhost podman[214145]: 2025-11-28 09:30:31.191036973 +0000 UTC m=+0.056683068 container create cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 28 04:30:31 localhost podman[214145]: 2025-11-28 09:30:31.165007946 +0000 UTC m=+0.030654091 image pull quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 04:30:31 localhost python3[214082]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 28 04:30:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8643 DF PROTO=TCP SPT=50356 DPT=9105 SEQ=3978927549 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD038FB0000000001030307)
Nov 28 04:30:32 localhost python3.9[214291]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:30:32 localhost systemd[1]: virtqemud.service: Deactivated successfully.
Nov 28 04:30:32 localhost systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 28 04:30:34 localhost python3.9[214405]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:34 localhost python3.9[214460]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:30:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3065 DF PROTO=TCP SPT=44944 DPT=9105 SEQ=1053215145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0447A0000000001030307)
Nov 28 04:30:35 localhost python3.9[214569]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322234.5241246-1591-130175300280513/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:30:35 localhost python3.9[214624]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 04:30:35 localhost systemd[1]: Reloading.
Nov 28 04:30:35 localhost systemd-rc-local-generator[214645]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:30:35 localhost systemd-sysv-generator[214650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:35 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost python3.9[214714]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:30:36 localhost systemd[1]: Reloading. Nov 28 04:30:36 localhost systemd-rc-local-generator[214742]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:30:36 localhost systemd-sysv-generator[214748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:36 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:37 localhost systemd[1]: Starting multipathd container... Nov 28 04:30:37 localhost systemd[1]: Started libcrun container. Nov 28 04:30:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d42f4221e88fe424ad9b23bcf9b91099549aefec136c75a9ba145b2cc16629/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 28 04:30:37 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d42f4221e88fe424ad9b23bcf9b91099549aefec136c75a9ba145b2cc16629/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 04:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:30:37 localhost podman[214756]: 2025-11-28 09:30:37.21081971 +0000 UTC m=+0.146851264 container init cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:30:37 localhost multipathd[214770]: + sudo -E kolla_set_configs Nov 28 04:30:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:30:37 localhost podman[214756]: 2025-11-28 09:30:37.245228757 +0000 UTC m=+0.181260271 container start cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125) Nov 28 04:30:37 localhost podman[214756]: multipathd Nov 28 04:30:37 localhost systemd[1]: Started multipathd container. 
Nov 28 04:30:37 localhost multipathd[214770]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:30:37 localhost multipathd[214770]: INFO:__main__:Validating config file Nov 28 04:30:37 localhost multipathd[214770]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:30:37 localhost multipathd[214770]: INFO:__main__:Writing out command to execute Nov 28 04:30:37 localhost multipathd[214770]: ++ cat /run_command Nov 28 04:30:37 localhost multipathd[214770]: + CMD='/usr/sbin/multipathd -d' Nov 28 04:30:37 localhost multipathd[214770]: + ARGS= Nov 28 04:30:37 localhost multipathd[214770]: + sudo kolla_copy_cacerts Nov 28 04:30:37 localhost multipathd[214770]: + [[ ! -n '' ]] Nov 28 04:30:37 localhost multipathd[214770]: + . kolla_extend_start Nov 28 04:30:37 localhost multipathd[214770]: Running command: '/usr/sbin/multipathd -d' Nov 28 04:30:37 localhost multipathd[214770]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Nov 28 04:30:37 localhost multipathd[214770]: + umask 0022 Nov 28 04:30:37 localhost multipathd[214770]: + exec /usr/sbin/multipathd -d Nov 28 04:30:37 localhost podman[214778]: 2025-11-28 09:30:37.336205248 +0000 UTC m=+0.085725739 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:30:37 localhost multipathd[214770]: 10061.561487 | --------start up-------- Nov 28 04:30:37 localhost multipathd[214770]: 10061.561508 | read /etc/multipath.conf Nov 28 04:30:37 localhost multipathd[214770]: 10061.565375 | path checkers start up Nov 28 04:30:37 localhost podman[214778]: 2025-11-28 09:30:37.355465116 +0000 UTC m=+0.104985637 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd) Nov 28 04:30:37 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:30:38 localhost python3.9[214916]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:30:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59047 DF PROTO=TCP SPT=46606 DPT=9102 SEQ=3210608279 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD055BA0000000001030307) Nov 28 04:30:40 localhost python3.9[215028]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:30:41 localhost python3.9[215151]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:30:42 localhost systemd[1]: Stopping multipathd container... Nov 28 04:30:42 localhost multipathd[214770]: 10067.046254 | exit (signal) Nov 28 04:30:42 localhost multipathd[214770]: 10067.047764 | --------shut down------- Nov 28 04:30:42 localhost systemd[1]: libpod-cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.scope: Deactivated successfully. 
Nov 28 04:30:42 localhost podman[215214]: 2025-11-28 09:30:42.855795215 +0000 UTC m=+0.104351347 container died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:30:42 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.timer: Deactivated successfully. 
Nov 28 04:30:42 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:30:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f-userdata-shm.mount: Deactivated successfully. Nov 28 04:30:42 localhost systemd[1]: var-lib-containers-storage-overlay-80d42f4221e88fe424ad9b23bcf9b91099549aefec136c75a9ba145b2cc16629-merged.mount: Deactivated successfully. Nov 28 04:30:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3066 DF PROTO=TCP SPT=44944 DPT=9105 SEQ=1053215145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD064FB0000000001030307) Nov 28 04:30:43 localhost podman[215214]: 2025-11-28 09:30:43.123804286 +0000 UTC m=+0.372360388 container cleanup cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0) Nov 28 04:30:43 localhost podman[215214]: multipathd Nov 28 04:30:43 localhost podman[215247]: 2025-11-28 09:30:43.23197849 +0000 UTC m=+0.076213985 container cleanup cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=multipathd) Nov 28 04:30:43 localhost podman[215247]: multipathd Nov 28 04:30:43 localhost systemd[1]: edpm_multipathd.service: Deactivated successfully. Nov 28 04:30:43 localhost systemd[1]: Stopped multipathd container. Nov 28 04:30:43 localhost systemd[1]: Starting multipathd container... Nov 28 04:30:43 localhost systemd[1]: Started libcrun container. Nov 28 04:30:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d42f4221e88fe424ad9b23bcf9b91099549aefec136c75a9ba145b2cc16629/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 28 04:30:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80d42f4221e88fe424ad9b23bcf9b91099549aefec136c75a9ba145b2cc16629/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 04:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:30:43 localhost podman[215260]: 2025-11-28 09:30:43.408436382 +0000 UTC m=+0.145227514 container init cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 04:30:43 localhost multipathd[215273]: + sudo -E kolla_set_configs Nov 28 04:30:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:30:43 localhost podman[215260]: 2025-11-28 09:30:43.452681653 +0000 UTC m=+0.189472845 container start cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 04:30:43 localhost podman[215260]: multipathd Nov 28 04:30:43 localhost systemd[1]: Started multipathd container. 
Nov 28 04:30:43 localhost multipathd[215273]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:30:43 localhost multipathd[215273]: INFO:__main__:Validating config file Nov 28 04:30:43 localhost multipathd[215273]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:30:43 localhost multipathd[215273]: INFO:__main__:Writing out command to execute Nov 28 04:30:43 localhost multipathd[215273]: ++ cat /run_command Nov 28 04:30:43 localhost multipathd[215273]: + CMD='/usr/sbin/multipathd -d' Nov 28 04:30:43 localhost multipathd[215273]: + ARGS= Nov 28 04:30:43 localhost multipathd[215273]: + sudo kolla_copy_cacerts Nov 28 04:30:43 localhost podman[215282]: 2025-11-28 09:30:43.543644605 +0000 UTC m=+0.083145340 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:30:43 localhost multipathd[215273]: + [[ ! -n '' ]] Nov 28 04:30:43 localhost multipathd[215273]: + . kolla_extend_start Nov 28 04:30:43 localhost multipathd[215273]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\''' Nov 28 04:30:43 localhost multipathd[215273]: Running command: '/usr/sbin/multipathd -d' Nov 28 04:30:43 localhost multipathd[215273]: + umask 0022 Nov 28 04:30:43 localhost multipathd[215273]: + exec /usr/sbin/multipathd -d Nov 28 04:30:43 localhost multipathd[215273]: 10067.784746 | --------start up-------- Nov 28 04:30:43 localhost multipathd[215273]: 10067.784768 | read /etc/multipath.conf Nov 28 04:30:43 localhost multipathd[215273]: 10067.788915 | path checkers start up Nov 28 04:30:43 localhost podman[215282]: 2025-11-28 09:30:43.579861878 +0000 UTC m=+0.119362603 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true) Nov 28 04:30:43 localhost podman[215282]: unhealthy Nov 28 04:30:43 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:30:43 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Failed with result 'exit-code'. 
Nov 28 04:30:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13556 DF PROTO=TCP SPT=44324 DPT=9882 SEQ=1017419348 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD068FA0000000001030307) Nov 28 04:30:44 localhost python3.9[215421]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:30:45 localhost python3.9[215531]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 04:30:46 localhost python3.9[215659]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Nov 28 04:30:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14802 DF PROTO=TCP SPT=48694 DPT=9100 SEQ=3818782677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD072BA0000000001030307) Nov 28 04:30:46 localhost python3.9[215778]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 
04:30:47 localhost python3.9[215866]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322246.4637814-1832-172212941398812/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:30:48 localhost python3.9[215976]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:30:49 localhost python3.9[216086]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:30:49 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 28 04:30:49 localhost systemd[1]: Stopped Load Kernel Modules. Nov 28 04:30:49 localhost systemd[1]: Stopping Load Kernel Modules... Nov 28 04:30:49 localhost systemd[1]: Starting Load Kernel Modules... Nov 28 04:30:49 localhost systemd-modules-load[216090]: Module 'msr' is built in Nov 28 04:30:49 localhost systemd[1]: Finished Load Kernel Modules. 
Nov 28 04:30:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14803 DF PROTO=TCP SPT=48694 DPT=9100 SEQ=3818782677 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0827A0000000001030307) Nov 28 04:30:50 localhost python3.9[216201]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:30:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:30:50.810 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:30:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:30:50.812 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:30:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:30:50.812 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:30:54 localhost systemd[1]: Reloading. 
Nov 28 04:30:54 localhost systemd-rc-local-generator[216231]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:30:54 localhost systemd-sysv-generator[216235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: Reloading. Nov 28 04:30:54 localhost systemd-rc-local-generator[216271]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:30:54 localhost systemd-sysv-generator[216276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd-logind[763]: Watching system buttons on /dev/input/event0 (Power Button) Nov 28 04:30:55 localhost systemd-logind[763]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 28 04:30:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:30:55 localhost lvm[216322]: PV /dev/loop4 online, VG ceph_vg1 is complete. Nov 28 04:30:55 localhost lvm[216322]: VG ceph_vg1 finished Nov 28 04:30:55 localhost lvm[216320]: PV /dev/loop3 online, VG ceph_vg0 is complete. Nov 28 04:30:55 localhost lvm[216320]: VG ceph_vg0 finished Nov 28 04:30:55 localhost podman[216321]: 2025-11-28 09:30:55.207388464 +0000 UTC m=+0.080409124 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:30:55 localhost podman[216321]: 2025-11-28 09:30:55.233384351 +0000 UTC m=+0.106405011 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:30:55 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:30:55 localhost systemd[1]: Started /usr/bin/systemctl start man-db-cache-update. Nov 28 04:30:55 localhost systemd[1]: Starting man-db-cache-update.service... Nov 28 04:30:55 localhost systemd[1]: Reloading. Nov 28 04:30:55 localhost systemd-sysv-generator[216398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:30:55 localhost systemd-rc-local-generator[216393]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:55 localhost systemd[1]: Queuing reload/restart jobs for marked units… Nov 28 04:30:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:30:56 localhost systemd[1]: man-db-cache-update.service: Deactivated successfully. Nov 28 04:30:56 localhost systemd[1]: Finished man-db-cache-update.service. Nov 28 04:30:56 localhost systemd[1]: man-db-cache-update.service: Consumed 1.149s CPU time. 
Nov 28 04:30:56 localhost systemd[1]: run-r70ea4d54fb0a412baff52c542a9c9d28.service: Deactivated successfully. Nov 28 04:30:56 localhost systemd[1]: tmp-crun.Azf98P.mount: Deactivated successfully. Nov 28 04:30:56 localhost podman[217535]: 2025-11-28 09:30:56.447363258 +0000 UTC m=+0.086458715 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:30:56 localhost podman[217535]: 2025-11-28 09:30:56.482364674 +0000 UTC m=+0.121460091 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:30:56 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:30:57 localhost python3.9[217660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:30:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31051 DF PROTO=TCP SPT=45572 DPT=9105 SEQ=2228635939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD09DC30000000001030307) Nov 28 04:30:58 localhost python3.9[217774]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:30:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31052 DF PROTO=TCP SPT=45572 DPT=9105 SEQ=2228635939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0A1BB0000000001030307) Nov 28 04:30:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26620 DF PROTO=TCP SPT=34866 DPT=9882 SEQ=4163958840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0A2EC0000000001030307) Nov 28 04:30:59 localhost python3.9[217884]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system 
no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:30:59 localhost systemd[1]: Reloading. Nov 28 04:30:59 localhost systemd-sysv-generator[217909]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:30:59 localhost systemd-rc-local-generator[217906]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:30:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:00 localhost python3.9[218027]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:31:00 localhost network[218044]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:31:00 localhost network[218045]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:31:00 localhost network[218046]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:31:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26622 DF PROTO=TCP SPT=34866 DPT=9882 SEQ=4163958840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0AEFA0000000001030307) Nov 28 04:31:02 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:31:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31054 DF PROTO=TCP SPT=45572 DPT=9105 SEQ=2228635939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0B97A0000000001030307) Nov 28 04:31:05 localhost python3.9[218281]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:06 localhost python3.9[218392]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:07 localhost python3.9[218503]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:08 localhost python3.9[218614]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50658 DF PROTO=TCP SPT=57754 DPT=9102 SEQ=150800526 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0CAFA0000000001030307) Nov 28 04:31:09 localhost python3.9[218725]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:11 localhost python3.9[218836]: 
ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:12 localhost python3.9[218947]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31055 DF PROTO=TCP SPT=45572 DPT=9105 SEQ=2228635939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0D8FA0000000001030307) Nov 28 04:31:13 localhost python3.9[219058]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:31:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:31:13 localhost podman[219093]: 2025-11-28 09:31:13.98494426 +0000 UTC m=+0.089136427 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:31:13 localhost podman[219093]: 2025-11-28 09:31:13.997682736 +0000 UTC m=+0.101874863 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:31:14 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:31:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1766 DF PROTO=TCP SPT=46146 DPT=9101 SEQ=1223127398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0DEC00000000001030307) Nov 28 04:31:14 localhost python3.9[219189]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:14 localhost python3.9[219299]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:15 localhost python3.9[219409]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:16 localhost python3.9[219519]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54607 DF PROTO=TCP SPT=59018 DPT=9100 SEQ=2143755290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0E7FB0000000001030307) Nov 28 04:31:16 localhost python3.9[219629]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:17 localhost python3.9[219739]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:18 localhost python3.9[219849]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None 
attributes=None Nov 28 04:31:18 localhost python3.9[219959]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:19 localhost python3.9[220069]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:20 localhost python3.9[220179]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54608 DF PROTO=TCP SPT=59018 DPT=9100 SEQ=2143755290 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD0F7BA0000000001030307) Nov 28 04:31:20 localhost python3.9[220289]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:22 localhost python3.9[220399]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:22 localhost python3.9[220509]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:23 localhost python3.9[220619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:24 localhost python3.9[220729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:25 localhost python3.9[220839]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:31:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:31:25 localhost podman[220950]: 2025-11-28 09:31:25.93288713 +0000 UTC m=+0.083973498 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 04:31:25 localhost podman[220950]: 2025-11-28 09:31:25.972964475 +0000 UTC m=+0.124050903 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Nov 28 04:31:25 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:31:26 localhost python3.9[220949]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:31:26 localhost python3.9[221086]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:31:26 localhost podman[221087]: 2025-11-28 09:31:26.963263375 +0000 UTC m=+0.076141605 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:31:26 localhost podman[221087]: 2025-11-28 09:31:26.968336462 +0000 UTC m=+0.081214693 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent) Nov 28 04:31:26 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:31:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48797 DF PROTO=TCP SPT=34422 DPT=9105 SEQ=603387921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD112F30000000001030307) Nov 28 04:31:27 localhost python3.9[221216]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:31:27 localhost systemd[1]: Reloading. Nov 28 04:31:28 localhost systemd-rc-local-generator[221241]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:31:28 localhost systemd-sysv-generator[221246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:31:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48798 DF PROTO=TCP SPT=34422 DPT=9105 SEQ=603387921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD116FA0000000001030307) Nov 28 04:31:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34434 DF PROTO=TCP SPT=36436 DPT=9882 SEQ=1351299922 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1181C0000000001030307) Nov 28 
04:31:28 localhost python3.9[221361]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:29 localhost python3.9[221472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:30 localhost python3.9[221583]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:30 localhost python3.9[221694]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3068 DF PROTO=TCP SPT=44944 DPT=9105 SEQ=1053215145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD122FA0000000001030307) Nov 28 04:31:32 localhost python3.9[221805]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None 
stdin=None Nov 28 04:31:33 localhost python3.9[221916]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:34 localhost python3.9[222027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48800 DF PROTO=TCP SPT=34422 DPT=9105 SEQ=603387921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD12EBA0000000001030307) Nov 28 04:31:35 localhost python3.9[222138]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:31:37 localhost python3.9[222249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:38 localhost python3.9[222359]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers 
setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14237 DF PROTO=TCP SPT=44678 DPT=9102 SEQ=3061132303 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1403A0000000001030307) Nov 28 04:31:39 localhost python3.9[222469]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:39 localhost python3.9[222579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:40 localhost sshd[222597]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:31:40 localhost python3.9[222691]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:41 localhost python3.9[222801]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:41 localhost python3.9[222911]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:42 localhost python3.9[223021]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48801 DF PROTO=TCP SPT=34422 DPT=9105 SEQ=603387921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD14EFA0000000001030307) Nov 28 04:31:43 localhost python3.9[223131]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory 
recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:43 localhost python3.9[223241]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35641 DF PROTO=TCP SPT=42412 DPT=9101 SEQ=267168259 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD153F00000000001030307) Nov 28 04:31:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:31:44 localhost podman[223259]: 2025-11-28 09:31:44.993344495 +0000 UTC m=+0.094640199 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd) Nov 28 04:31:45 localhost podman[223259]: 2025-11-28 09:31:45.035627718 +0000 UTC m=+0.136923472 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:31:45 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:31:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10326 DF PROTO=TCP SPT=45948 DPT=9100 SEQ=3223047178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD15D3A0000000001030307) Nov 28 04:31:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10327 DF PROTO=TCP SPT=45948 DPT=9100 SEQ=3223047178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD16CFA0000000001030307) Nov 28 04:31:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:31:50.811 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:31:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:31:50.812 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:31:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:31:50.812 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:31:51 localhost python3.9[223513]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Nov 28 04:31:51 localhost python3.9[223624]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False 
non_unique=False gid_min=None gid_max=None Nov 28 04:31:53 localhost python3.9[223740]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 28 04:31:54 localhost sshd[223766]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:31:54 localhost systemd-logind[763]: New session 54 of user zuul. Nov 28 04:31:54 localhost systemd[1]: Started Session 54 of User zuul. Nov 28 04:31:54 localhost systemd[1]: session-54.scope: Deactivated successfully. Nov 28 04:31:54 localhost systemd-logind[763]: Session 54 logged out. Waiting for processes to exit. Nov 28 04:31:54 localhost systemd-logind[763]: Removed session 54. 
Nov 28 04:31:55 localhost python3.9[223877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:31:55 localhost python3.9[223963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322314.5895195-3391-223551265961520/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:56 localhost python3.9[224071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:31:56 localhost python3.9[224126]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:31:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:31:57 localhost podman[224144]: 2025-11-28 09:31:57.022662359 +0000 UTC m=+0.113123192 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 04:31:57 localhost podman[224144]: 2025-11-28 09:31:57.103322304 +0000 UTC m=+0.193783117 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:31:57 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:31:57 localhost podman[224199]: 2025-11-28 09:31:57.155128891 +0000 UTC m=+0.123678910 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 28 04:31:57 localhost podman[224199]: 2025-11-28 09:31:57.185670379 +0000 UTC 
m=+0.154220398 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:31:57 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:31:57 localhost python3.9[224278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:31:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3401 DF PROTO=TCP SPT=33760 DPT=9105 SEQ=3839620824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD188220000000001030307) Nov 28 04:31:57 localhost python3.9[224364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322316.9653697-3391-101791380511549/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3402 DF PROTO=TCP SPT=33760 DPT=9105 SEQ=3839620824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD18C3A0000000001030307) Nov 28 04:31:58 localhost python3.9[224472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:31:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10328 DF PROTO=TCP 
SPT=45948 DPT=9100 SEQ=3223047178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD18CFA0000000001030307) Nov 28 04:31:59 localhost python3.9[224558]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322318.1478288-3391-31584114663146/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=ea203e550d6f82354ff814f038f2bcabd98eed86 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:31:59 localhost python3.9[224666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:32:01 localhost python3.9[224752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322319.262768-3391-142197283468942/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:32:01 localhost python3.9[224860]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:32:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64744 DF PROTO=TCP SPT=51316 DPT=9882 SEQ=2403100779 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1993A0000000001030307) Nov 28 04:32:03 localhost python3.9[224946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322321.4034064-3391-133399573480838/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:32:04 localhost python3.9[225056]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:32:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3404 DF PROTO=TCP SPT=33760 DPT=9105 SEQ=3839620824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1A3FB0000000001030307) Nov 28 04:32:05 localhost python3.9[225166]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 28 04:32:05 localhost python3.9[225276]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:06 localhost python3.9[225388]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:32:07 localhost python3.9[225496]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:08 localhost python3.9[225606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:32:08 localhost python3.9[225692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322327.572156-3767-247361820802228/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:32:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 
TOS=0x00 PREC=0x00 TTL=62 ID=33309 DF PROTO=TCP SPT=55346 DPT=9102 SEQ=777400748 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1B57B0000000001030307) Nov 28 04:32:09 localhost python3.9[225800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:32:09 localhost python3.9[225886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322328.8346283-3810-206243369475966/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:32:10 localhost python3.9[225996]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Nov 28 04:32:11 localhost python3.9[226106]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:32:12 localhost python3[226216]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:32:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3405 DF PROTO=TCP SPT=33760 DPT=9105 SEQ=3839620824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5AD1C4FA0000000001030307) Nov 28 04:32:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64746 DF PROTO=TCP SPT=51316 DPT=9882 SEQ=2403100779 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1C8FB0000000001030307) Nov 28 04:32:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:32:15 localhost podman[226242]: 2025-11-28 09:32:15.951241298 +0000 UTC m=+0.061676693 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 04:32:15 localhost podman[226242]: 2025-11-28 09:32:15.960268799 +0000 UTC m=+0.070704204 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 04:32:15 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:32:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7859 DF PROTO=TCP SPT=59650 DPT=9100 SEQ=275072454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1D23A0000000001030307) Nov 28 04:32:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7860 DF PROTO=TCP SPT=59650 DPT=9100 SEQ=275072454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1E1FB0000000001030307) Nov 28 04:32:23 localhost podman[226229]: 2025-11-28 09:32:12.555855097 +0000 UTC m=+0.047273458 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 28 04:32:23 localhost podman[226309]: Nov 28 04:32:23 localhost podman[226309]: 2025-11-28 09:32:23.299556215 +0000 UTC m=+0.142311863 container create acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 
'/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3) Nov 28 04:32:23 localhost podman[226309]: 2025-11-28 09:32:23.206246266 +0000 UTC m=+0.049001944 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 28 04:32:23 localhost python3[226216]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared 
--volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init Nov 28 04:32:24 localhost python3.9[226457]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:25 localhost python3.9[226569]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Nov 28 04:32:26 localhost python3.9[226679]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:32:27 localhost python3[226789]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:32:27 localhost python3[226789]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a",#012 "Digest": "sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:36:07.10279245Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 
],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211782527,#012 "VirtualSize": 1211782527,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309/diff:/var/lib/containers/storage/overlay/f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:c136b33417f134a3b932677bcf7a2df089c29f20eca250129eafd2132d4708bb",#012 "sha256:7913bde445307e7f24767d9149b2e7f498930793ac9f073ccec69b608c009d31",#012 "sha256:084b2323a717fe711217b0ec21da61f4804f7a0d506adae935888421b80809cf"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": 
"1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": 
true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Nov 28 04:32:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:32:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:32:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25472 DF PROTO=TCP SPT=59640 DPT=9105 SEQ=2968183367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD1FD530000000001030307) Nov 28 04:32:27 localhost podman[226838]: 2025-11-28 09:32:27.602124147 +0000 UTC m=+0.125591405 container remove ae9e8db2a854119ed082566caf78eaf6c9d895e78df0b69420b5a74f73b528b0 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': '18a2751501986164e709168f53ab57c8-bbb5ea37891e3118676a78b59837de90'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute) Nov 28 04:32:27 localhost python3[226789]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute Nov 28 04:32:27 localhost systemd[1]: tmp-crun.M7P2if.mount: Deactivated successfully. 
Nov 28 04:32:27 localhost podman[226851]: 2025-11-28 09:32:27.709437761 +0000 UTC m=+0.144695266 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:32:27 localhost podman[226850]: 2025-11-28 09:32:27.687979747 +0000 UTC 
m=+0.125290007 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:32:27 localhost podman[226851]: 2025-11-28 09:32:27.743516825 +0000 UTC m=+0.178774280 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:32:27 localhost podman[226875]: Nov 28 04:32:27 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:32:27 localhost podman[226850]: 2025-11-28 09:32:27.772607464 +0000 UTC m=+0.209917724 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true) Nov 28 04:32:27 localhost podman[226875]: 2025-11-28 09:32:27.775489671 +0000 UTC m=+0.147451411 container create 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 
'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:32:27 localhost podman[226875]: 2025-11-28 09:32:27.731392149 +0000 UTC m=+0.103353939 image pull quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified Nov 28 04:32:27 localhost python3[226789]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', 
'/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start Nov 28 04:32:27 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:32:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25473 DF PROTO=TCP SPT=59640 DPT=9105 SEQ=2968183367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2017A0000000001030307) Nov 28 04:32:28 localhost python3.9[227040]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42259 DF PROTO=TCP SPT=58618 DPT=9882 SEQ=1004386776 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2027C0000000001030307) Nov 28 04:32:29 localhost python3.9[227152]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:32:30 localhost python3.9[227263]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322349.7524283-4086-230659389946330/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:32:30 localhost python3.9[227318]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system 
no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:32:30 localhost systemd[1]: Reloading. Nov 28 04:32:31 localhost systemd-sysv-generator[227352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:32:31 localhost systemd-rc-local-generator[227349]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48803 DF PROTO=TCP SPT=34422 DPT=9105 SEQ=603387921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD20CFA0000000001030307) Nov 28 04:32:31 localhost python3.9[227408]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:32:31 localhost systemd[1]: Reloading. Nov 28 04:32:32 localhost systemd-rc-local-generator[227439]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:32:32 localhost systemd-sysv-generator[227442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:32 localhost systemd[1]: Starting nova_compute container... Nov 28 04:32:32 localhost systemd[1]: Started libcrun container. 
Nov 28 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:32 localhost podman[227450]: 2025-11-28 09:32:32.419234432 +0000 UTC m=+0.142982231 container init 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:32:32 localhost podman[227450]: 2025-11-28 09:32:32.429827155 +0000 UTC m=+0.153574954 container start 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:32:32 localhost podman[227450]: nova_compute Nov 28 04:32:32 localhost nova_compute[227465]: + sudo -E kolla_set_configs Nov 28 04:32:32 localhost systemd[1]: Started nova_compute container. Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Validating config file Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying service configuration files Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying 
/var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Deleting /etc/ceph Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Creating directory /etc/ceph Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/ceph Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:32:32 localhost nova_compute[227465]: 
INFO:__main__:Deleting /var/lib/nova/.ssh/config Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Deleting /usr/sbin/iscsiadm Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Writing out command to execute Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:32:32 localhost nova_compute[227465]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 28 04:32:32 localhost nova_compute[227465]: ++ cat /run_command Nov 28 04:32:32 localhost nova_compute[227465]: + CMD=nova-compute Nov 28 04:32:32 localhost nova_compute[227465]: + ARGS= Nov 28 04:32:32 localhost nova_compute[227465]: + sudo kolla_copy_cacerts Nov 28 04:32:32 localhost nova_compute[227465]: + [[ ! -n '' ]] Nov 28 04:32:32 localhost nova_compute[227465]: + . 
kolla_extend_start Nov 28 04:32:32 localhost nova_compute[227465]: Running command: 'nova-compute' Nov 28 04:32:32 localhost nova_compute[227465]: + echo 'Running command: '\''nova-compute'\''' Nov 28 04:32:32 localhost nova_compute[227465]: + umask 0022 Nov 28 04:32:32 localhost nova_compute[227465]: + exec nova-compute Nov 28 04:32:33 localhost python3.9[227585]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.205 227469 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.206 227469 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.206 227469 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.206 227469 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.322 227469 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.344 227469 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 
09:32:34.344 227469 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Nov 28 04:32:34 localhost python3.9[227695]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.711 227469 INFO nova.virt.driver [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Nov 28 04:32:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25475 DF PROTO=TCP SPT=59640 DPT=9105 SEQ=2968183367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2193A0000000001030307) Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.836 227469 INFO nova.compute.provider_config [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] No provider configs found in /etc/nova/provider_config/. 
If files are present, ensure the Nova process has access.#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.858 227469 WARNING nova.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.858 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.859 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.859 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.859 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.859 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ******************************************************************************** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.860 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] backdoor_port = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.861 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] console_host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.862 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] daemon = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 
localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.863 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG 
oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.864 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.865 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG 
oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.866 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.867 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG 
oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.868 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.869 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metadata_listen_port = 8775 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.870 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.871 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 
04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.872 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 
DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.873 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reserved_host_cpus = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.874 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.875 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_down_time = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost 
nova_compute[227465]: 2025-11-28 09:32:34.876 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.877 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vcpu_pin_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.878 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.879 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.auth_strategy = keystone 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.880 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.881 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.882 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.883 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.enable_retry_client = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.884 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_password = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.885 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_socket_timeout = 1.0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.886 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.887 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.888 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.889 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.890 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.891 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.892 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.893 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.894 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.895 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.896 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.897 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.898 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.899 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.900 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.901 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.902 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.903 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.904 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.905 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.906 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.907 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.907 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.907 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.907 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.907 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] mks.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.908 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.909 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.910 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 
localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.911 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG 
oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.912 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.913 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.914 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.915 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.915 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.915 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.915 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.915 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.collect_timing = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.916 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.917 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost 
nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.918 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.919 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.920 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.status_code_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.921 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.922 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.923 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.924 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.925 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.926 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.927 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 WARNING oslo_config.cfg [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 28 04:32:34 localhost nova_compute[227465]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 28 04:32:34 localhost nova_compute[227465]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 28 04:32:34 localhost nova_compute[227465]: and ``live_migration_inbound_addr`` respectively.
Nov 28 04:32:34 localhost nova_compute[227465]: ). Its value may be silently ignored in the future.
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.928 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.929 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.930 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rbd_secret_uuid = 2c5417c9-00eb-57d5-a565-ddecbc7995c1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.931 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.932 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.933 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.934 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.935 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.936 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.937 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.938 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.939 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.940 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.941 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469
DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.942 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.943 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.944 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.system_scope = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.945 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.946 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.injected_files = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.947 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost 
nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.948 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.949 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.950 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 
DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.951 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.max_instances_per_host = 
50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.952 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 
2025-11-28 09:32:34.953 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.954 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.954 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.954 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.954 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.954 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 
2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.955 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service 
[None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.956 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.957 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - 
- -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.958 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.auth_type = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.959 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.960 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.datastore_regex = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.961 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost 
nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.962 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 
09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.963 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 
227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.964 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.965 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 
2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.966 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.967 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.968 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.969 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.970 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 
localhost nova_compute[227465]: 2025-11-28 09:32:34.971 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.972 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.973 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 
227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.974 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.975 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_notifications.driver = ['noop'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.976 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.977 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.978 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.979 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 
2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.980 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG 
oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.981 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None 
req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.982 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost 
nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.983 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.984 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.985 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.986 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.logger_name = 
os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.987 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.988 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.988 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] 
nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.988 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.988 227469 DEBUG oslo_service.service [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 28 04:32:34 localhost nova_compute[227465]: 2025-11-28 09:32:34.989 227469 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.007 227469 INFO nova.virt.node [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from /var/lib/nova/compute_id#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.007 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.008 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.008 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Starting connection event dispatch thread initialize 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.008 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Nov 28 04:32:35 localhost systemd[1]: Started libvirt QEMU daemon. Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.069 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.071 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.072 227469 INFO nova.virt.libvirt.driver [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Connection event '1' reason 'None'#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.084 227469 DEBUG nova.virt.libvirt.volume.mount [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.986 227469 INFO nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Libvirt host capabilities
[multi-line libvirt capabilities XML; markup stripped during log extraction. Recoverable fields: host UUID 4c358f0e-7e15-44e5-bde2-714780d05a92, arch x86_64, CPU model EPYC-Rome-v4 (vendor AMD), migration transports tcp and rdma, memory 16116612 KiB / 4029153 pages; remaining elements lost.]
nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: selinux Nov 28 04:32:35 localhost nova_compute[227465]: 0 Nov 28 04:32:35 localhost nova_compute[227465]: system_u:system_r:svirt_t:s0 Nov 28 04:32:35 localhost nova_compute[227465]: system_u:system_r:svirt_tcg_t:s0 Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: dac Nov 28 04:32:35 localhost nova_compute[227465]: 0 Nov 28 04:32:35 localhost nova_compute[227465]: +107:+107 Nov 28 04:32:35 localhost nova_compute[227465]: +107:+107 Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: hvm Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: 32 Nov 28 04:32:35 localhost nova_compute[227465]: /usr/libexec/qemu-kvm Nov 28 04:32:35 localhost nova_compute[227465]: pc-i440fx-rhel7.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.8.0 Nov 28 04:32:35 localhost nova_compute[227465]: q35 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.4.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.5.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.3.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel7.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.4.0 Nov 28 04:32:35 localhost nova_compute[227465]: 
pc-q35-rhel9.2.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.2.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.0.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.0.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.1.0 Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: hvm Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: 64 Nov 28 04:32:35 localhost nova_compute[227465]: /usr/libexec/qemu-kvm Nov 28 04:32:35 localhost nova_compute[227465]: pc-i440fx-rhel7.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.8.0 Nov 28 04:32:35 localhost nova_compute[227465]: q35 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.4.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.5.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.3.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel7.6.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.4.0 Nov 28 04:32:35 localhost 
nova_compute[227465]: pc-q35-rhel9.2.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.2.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel9.0.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.0.0 Nov 28 04:32:35 localhost nova_compute[227465]: pc-q35-rhel8.1.0 Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: Nov 28 04:32:35 localhost nova_compute[227465]: #033[00m Nov 28 04:32:35 localhost nova_compute[227465]: 2025-11-28 09:32:35.996 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.020 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: /usr/libexec/qemu-kvm Nov 28 04:32:36 localhost nova_compute[227465]: kvm Nov 28 04:32:36 localhost nova_compute[227465]: pc-q35-rhel9.8.0 Nov 28 04:32:36 localhost nova_compute[227465]: i686 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: rom Nov 28 04:32:36 localhost nova_compute[227465]: pflash Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: yes Nov 28 04:32:36 localhost nova_compute[227465]: no Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: no Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome Nov 28 04:32:36 localhost nova_compute[227465]: AMD Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 486 Nov 28 04:32:36 localhost nova_compute[227465]: 486-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v2 Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 
28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Conroe Nov 28 04:32:36 localhost nova_compute[227465]: Conroe-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Cooperlake Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cooperlake-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cooperlake-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Denverton Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Denverton-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Denverton-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Denverton-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Dhyana Nov 28 04:32:36 localhost nova_compute[227465]: Dhyana-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Dhyana-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Genoa Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Genoa-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-IBPB Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Milan Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Milan-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Milan-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: EPYC-Rome-v2
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome-v3
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome-v4
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-v1
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-v2
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-v3
Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-v4
Nov 28 04:32:36 localhost nova_compute[227465]: GraniteRapids
Nov 28 04:32:36 localhost nova_compute[227465]: GraniteRapids-v1
Nov 28 04:32:36 localhost nova_compute[227465]: GraniteRapids-v2
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-noTSX
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-noTSX-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v2
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v3
Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v4
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-noTSX
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v2
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v3
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v4
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v5
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v6
Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v7
Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge
Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-v1
Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-v2
Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill
Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem
Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v2
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G2
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G2-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5
Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Penryn
Nov 28 04:32:36 localhost nova_compute[227465]: Penryn-v1
Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge
Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v1
Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v2
Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids
Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v1
Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v2
Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v3
Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest
Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-noTSX-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v1
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v2
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v3
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v4
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-noTSX-IBRS
Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v1
Nov 28 04:32:36 localhost
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v5 Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Westmere Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-v2 Nov 28 04:32:36 localhost nova_compute[227465]: athlon Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: athlon-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: core2duo Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: core2duo-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: coreduo Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: coreduo-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: kvm32 Nov 28 04:32:36 localhost nova_compute[227465]: kvm32-v1 Nov 28 04:32:36 localhost nova_compute[227465]: kvm64 Nov 28 04:32:36 localhost nova_compute[227465]: kvm64-v1 Nov 28 04:32:36 localhost nova_compute[227465]: n270 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: n270-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: pentium Nov 28 04:32:36 localhost nova_compute[227465]: pentium-v1 Nov 28 04:32:36 localhost nova_compute[227465]: pentium2 Nov 28 04:32:36 localhost nova_compute[227465]: pentium2-v1 Nov 28 04:32:36 localhost nova_compute[227465]: pentium3 Nov 28 04:32:36 localhost nova_compute[227465]: pentium3-v1 Nov 28 04:32:36 localhost nova_compute[227465]: phenom Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: phenom-v1 Nov 28 04:32:36 localhost nova_compute[227465]: 
Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: qemu32 Nov 28 04:32:36 localhost nova_compute[227465]: qemu32-v1 Nov 28 04:32:36 localhost nova_compute[227465]: qemu64 Nov 28 04:32:36 localhost nova_compute[227465]: qemu64-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: file Nov 28 04:32:36 localhost nova_compute[227465]: anonymous Nov 28 04:32:36 localhost nova_compute[227465]: memfd Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: disk Nov 28 04:32:36 localhost nova_compute[227465]: cdrom Nov 28 04:32:36 localhost nova_compute[227465]: floppy Nov 28 04:32:36 localhost nova_compute[227465]: lun Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: fdc Nov 28 04:32:36 localhost nova_compute[227465]: scsi Nov 28 04:32:36 localhost nova_compute[227465]: virtio Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: sata Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: virtio Nov 28 04:32:36 localhost nova_compute[227465]: virtio-transitional Nov 28 04:32:36 localhost nova_compute[227465]: virtio-non-transitional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: vnc Nov 28 04:32:36 localhost nova_compute[227465]: egl-headless Nov 28 04:32:36 localhost nova_compute[227465]: dbus Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: subsystem Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: default Nov 28 04:32:36 localhost nova_compute[227465]: mandatory Nov 28 04:32:36 localhost nova_compute[227465]: requisite Nov 28 04:32:36 localhost nova_compute[227465]: optional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: pci Nov 28 04:32:36 localhost nova_compute[227465]: scsi Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: virtio Nov 28 04:32:36 localhost nova_compute[227465]: virtio-transitional Nov 28 04:32:36 localhost nova_compute[227465]: virtio-non-transitional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: random Nov 28 04:32:36 localhost nova_compute[227465]: egd Nov 28 04:32:36 localhost nova_compute[227465]: builtin Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: path Nov 28 04:32:36 localhost nova_compute[227465]: handle Nov 28 04:32:36 localhost nova_compute[227465]: virtiofs Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: tpm-tis Nov 28 04:32:36 localhost nova_compute[227465]: tpm-crb Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: emulator Nov 28 04:32:36 localhost nova_compute[227465]: external Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 2.0 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: pty Nov 28 04:32:36 localhost nova_compute[227465]: unix Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: qemu Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: builtin Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: default Nov 28 04:32:36 localhost nova_compute[227465]: passt Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: isa Nov 28 04:32:36 localhost nova_compute[227465]: hyperv Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: null Nov 28 04:32:36 localhost nova_compute[227465]: vc Nov 28 04:32:36 localhost nova_compute[227465]: pty Nov 28 04:32:36 localhost nova_compute[227465]: dev Nov 28 04:32:36 localhost nova_compute[227465]: file Nov 28 04:32:36 localhost nova_compute[227465]: pipe Nov 28 04:32:36 localhost nova_compute[227465]: stdio Nov 28 04:32:36 localhost nova_compute[227465]: udp Nov 28 04:32:36 localhost nova_compute[227465]: tcp Nov 28 04:32:36 localhost nova_compute[227465]: unix Nov 28 04:32:36 localhost nova_compute[227465]: qemu-vdagent Nov 28 04:32:36 localhost nova_compute[227465]: dbus Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: relaxed Nov 28 04:32:36 localhost nova_compute[227465]: vapic Nov 28 04:32:36 localhost nova_compute[227465]: spinlocks Nov 28 04:32:36 localhost nova_compute[227465]: vpindex Nov 28 04:32:36 localhost nova_compute[227465]: runtime Nov 28 04:32:36 localhost nova_compute[227465]: synic Nov 28 04:32:36 localhost nova_compute[227465]: stimer Nov 28 04:32:36 localhost nova_compute[227465]: reset Nov 28 04:32:36 localhost nova_compute[227465]: vendor_id Nov 28 04:32:36 localhost nova_compute[227465]: frequencies Nov 28 04:32:36 localhost nova_compute[227465]: reenlightenment Nov 28 04:32:36 localhost nova_compute[227465]: tlbflush Nov 28 04:32:36 localhost nova_compute[227465]: ipi Nov 28 04:32:36 localhost nova_compute[227465]: avic Nov 28 04:32:36 localhost nova_compute[227465]: emsr_bitmap Nov 28 04:32:36 localhost nova_compute[227465]: xmm_input Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 4095 Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Linux KVM Hv Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: tdx Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.028 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 
- - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: /usr/libexec/qemu-kvm Nov 28 04:32:36 localhost nova_compute[227465]: kvm Nov 28 04:32:36 localhost nova_compute[227465]: pc-i440fx-rhel7.6.0 Nov 28 04:32:36 localhost nova_compute[227465]: i686 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: rom Nov 28 04:32:36 localhost nova_compute[227465]: pflash Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: yes Nov 28 04:32:36 localhost nova_compute[227465]: no Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: no Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: EPYC-Rome Nov 28 04:32:36 localhost nova_compute[227465]: AMD Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 486 Nov 28 04:32:36 localhost nova_compute[227465]: 486-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Broadwell-v4 Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Cascadelake-Server-v4 Nov 28 04:32:36 localhost 
Nov 28 04:32:36 localhost nova_compute[227465]: [multi-line libvirt CPU model list; surrounding XML markup lost in log capture, repeated per-line timestamp prefixes collapsed] Cascadelake-Server-v5 Conroe Conroe-v1 Cooperlake Cooperlake-v1 Cooperlake-v2 Denverton Denverton-v1 Denverton-v2 Denverton-v3 Dhyana Dhyana-v1 Dhyana-v2 EPYC EPYC-Genoa EPYC-Genoa-v1 EPYC-IBPB EPYC-Milan EPYC-Milan-v1 EPYC-Milan-v2 EPYC-Rome EPYC-Rome-v1 EPYC-Rome-v2 EPYC-Rome-v3 EPYC-Rome-v4 EPYC-v1 EPYC-v2 EPYC-v3 EPYC-v4 GraniteRapids GraniteRapids-v1 GraniteRapids-v2 Haswell Haswell-IBRS Haswell-noTSX Haswell-noTSX-IBRS Haswell-v1 Haswell-v2 Haswell-v3 Haswell-v4 Icelake-Server Icelake-Server-noTSX Icelake-Server-v1 Icelake-Server-v2 Icelake-Server-v3 Icelake-Server-v4 Icelake-Server-v5 Icelake-Server-v6 Icelake-Server-v7 IvyBridge IvyBridge-IBRS IvyBridge-v1 IvyBridge-v2 KnightsMill KnightsMill-v1 Nehalem Nehalem-IBRS Nehalem-v1 Nehalem-v2 Opteron_G1 Opteron_G1-v1 Opteron_G2 Opteron_G2-v1 Opteron_G3 Opteron_G3-v1 Opteron_G4 Opteron_G4-v1 Opteron_G5 Opteron_G5-v1 Penryn Penryn-v1 SandyBridge SandyBridge-IBRS SandyBridge-v1 SandyBridge-v2 SapphireRapids SapphireRapids-v1
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 
28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Snowridge-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Westmere Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Westmere-v2 Nov 28 04:32:36 localhost nova_compute[227465]: athlon Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: athlon-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: core2duo Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: core2duo-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: coreduo Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: coreduo-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: kvm32 Nov 28 04:32:36 localhost nova_compute[227465]: kvm32-v1 Nov 28 04:32:36 localhost nova_compute[227465]: kvm64 Nov 28 04:32:36 localhost nova_compute[227465]: kvm64-v1 Nov 28 04:32:36 localhost nova_compute[227465]: n270 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: n270-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: pentium Nov 28 04:32:36 localhost nova_compute[227465]: pentium-v1 Nov 28 04:32:36 localhost nova_compute[227465]: pentium2 Nov 28 
04:32:36 localhost nova_compute[227465]: pentium2-v1 Nov 28 04:32:36 localhost nova_compute[227465]: pentium3 Nov 28 04:32:36 localhost nova_compute[227465]: pentium3-v1 Nov 28 04:32:36 localhost nova_compute[227465]: phenom Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: phenom-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: qemu32 Nov 28 04:32:36 localhost nova_compute[227465]: qemu32-v1 Nov 28 04:32:36 localhost nova_compute[227465]: qemu64 Nov 28 04:32:36 localhost nova_compute[227465]: qemu64-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: file Nov 28 04:32:36 localhost nova_compute[227465]: anonymous Nov 28 04:32:36 localhost nova_compute[227465]: memfd Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: disk Nov 28 04:32:36 localhost nova_compute[227465]: cdrom Nov 28 04:32:36 localhost nova_compute[227465]: floppy Nov 28 04:32:36 localhost nova_compute[227465]: lun Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: ide Nov 28 04:32:36 localhost nova_compute[227465]: fdc Nov 28 04:32:36 localhost nova_compute[227465]: scsi Nov 28 04:32:36 localhost 
nova_compute[227465]: virtio Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: sata Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: virtio Nov 28 04:32:36 localhost nova_compute[227465]: virtio-transitional Nov 28 04:32:36 localhost nova_compute[227465]: virtio-non-transitional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: vnc Nov 28 04:32:36 localhost nova_compute[227465]: egl-headless Nov 28 04:32:36 localhost nova_compute[227465]: dbus Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: subsystem Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: default Nov 28 04:32:36 localhost nova_compute[227465]: mandatory Nov 28 04:32:36 localhost nova_compute[227465]: requisite Nov 28 04:32:36 localhost nova_compute[227465]: optional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: pci Nov 28 04:32:36 localhost nova_compute[227465]: scsi Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 
virtio Nov 28 04:32:36 localhost nova_compute[227465]: virtio-transitional Nov 28 04:32:36 localhost nova_compute[227465]: virtio-non-transitional Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: random Nov 28 04:32:36 localhost nova_compute[227465]: egd Nov 28 04:32:36 localhost nova_compute[227465]: builtin Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: path Nov 28 04:32:36 localhost nova_compute[227465]: handle Nov 28 04:32:36 localhost nova_compute[227465]: virtiofs Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: tpm-tis Nov 28 04:32:36 localhost nova_compute[227465]: tpm-crb Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: emulator Nov 28 04:32:36 localhost nova_compute[227465]: external Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 2.0 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: usb Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: pty Nov 28 04:32:36 localhost nova_compute[227465]: unix Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: qemu Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: builtin Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: default Nov 28 04:32:36 localhost nova_compute[227465]: passt Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: isa Nov 28 04:32:36 localhost nova_compute[227465]: hyperv Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: null Nov 28 04:32:36 localhost nova_compute[227465]: vc Nov 28 04:32:36 localhost nova_compute[227465]: pty Nov 28 04:32:36 localhost nova_compute[227465]: dev Nov 28 04:32:36 localhost nova_compute[227465]: file Nov 28 04:32:36 localhost nova_compute[227465]: pipe Nov 28 04:32:36 localhost nova_compute[227465]: stdio Nov 28 04:32:36 localhost nova_compute[227465]: udp Nov 28 04:32:36 localhost nova_compute[227465]: tcp Nov 28 04:32:36 localhost nova_compute[227465]: unix Nov 28 04:32:36 localhost nova_compute[227465]: qemu-vdagent Nov 28 04:32:36 localhost nova_compute[227465]: dbus Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 
28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: relaxed Nov 28 04:32:36 localhost nova_compute[227465]: vapic Nov 28 04:32:36 localhost nova_compute[227465]: spinlocks Nov 28 04:32:36 localhost nova_compute[227465]: vpindex Nov 28 04:32:36 localhost nova_compute[227465]: runtime Nov 28 04:32:36 localhost nova_compute[227465]: synic Nov 28 04:32:36 localhost nova_compute[227465]: stimer Nov 28 04:32:36 localhost nova_compute[227465]: reset Nov 28 04:32:36 localhost nova_compute[227465]: vendor_id Nov 28 04:32:36 localhost nova_compute[227465]: frequencies Nov 28 04:32:36 localhost nova_compute[227465]: reenlightenment Nov 28 04:32:36 localhost nova_compute[227465]: tlbflush Nov 28 04:32:36 localhost nova_compute[227465]: ipi Nov 28 04:32:36 localhost nova_compute[227465]: avic Nov 28 04:32:36 localhost nova_compute[227465]: emsr_bitmap Nov 28 04:32:36 localhost nova_compute[227465]: xmm_input Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 4095 Nov 28 04:32:36 localhost nova_compute[227465]: on Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: off Nov 28 04:32:36 localhost nova_compute[227465]: Linux KVM Hv Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
Nov 28 04:32:36 localhost nova_compute[227465]: [remainder of stripped capabilities XML; recoverable value: tdx] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.074 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.080 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 28 04:32:36 localhost nova_compute[227465]: [domainCapabilities XML with element markup stripped; recoverable values: /usr/libexec/qemu-kvm, kvm, pc-q35-rhel9.8.0, x86_64, efi, /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd]
Nov 28 04:32:36 localhost nova_compute[227465]: [stripped XML continues; recoverable values: rom, pflash; yes, no; yes, no; on, off; on, off; EPYC-Rome, AMD]
Nov 28 04:32:36 localhost nova_compute[227465]: [stripped XML CPU model list; recoverable model names, in order: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, list truncated in capture]
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v6 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v7 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: 
IvyBridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G2 Nov 28 04:32:36 localhost 
nova_compute[227465]: Opteron_G2-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Penryn Nov 28 04:32:36 localhost nova_compute[227465]: Penryn-v1 Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SierraForest-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Client-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-IBRS Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v2 Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Skylake-Server-v5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 
Nov 28 04:32:36 localhost nova_compute[227465]: [libvirt domain-capabilities XML, continued; element tags were lost in log capture, so only the value tokens below are recoverable, listed in original order with best-effort group labels]
Nov 28 04:32:36 localhost nova_compute[227465]:   CPU models (cont.): Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 28 04:32:36 localhost nova_compute[227465]:   memory backing source types: file, anonymous, memfd
Nov 28 04:32:36 localhost nova_compute[227465]:   disk device types: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:32:36 localhost nova_compute[227465]:   graphics types: vnc, egl-headless, dbus
Nov 28 04:32:36 localhost nova_compute[227465]:   hostdev: mode subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi
Nov 28 04:32:36 localhost nova_compute[227465]:   rng models: virtio, virtio-transitional, virtio-non-transitional; backend models: random, egd, builtin
Nov 28 04:32:36 localhost nova_compute[227465]:   filesystem driver types: path, handle, virtiofs
Nov 28 04:32:36 localhost nova_compute[227465]:   tpm models: tpm-tis, tpm-crb; backend models: emulator, external; backend version: 2.0
Nov 28 04:32:36 localhost nova_compute[227465]:   further device values (group names lost): usb; pty, unix; qemu, builtin; default, passt; isa, hyperv
Nov 28 04:32:36 localhost nova_compute[227465]:   character device types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 28 04:32:36 localhost nova_compute[227465]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; associated values: 4095, on, off, off, Linux KVM Hv
Nov 28 04:32:36 localhost nova_compute[227465]:   additional value: tdx
Nov 28 04:32:36 localhost nova_compute[227465]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.127 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 28 04:32:36 localhost nova_compute[227465]: [libvirt domain-capabilities XML; element tags again lost in log capture, recoverable values follow]
Nov 28 04:32:36 localhost nova_compute[227465]:   emulator path: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Nov 28 04:32:36 localhost nova_compute[227465]:   firmware loader value: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no
Nov 28 04:32:36 localhost nova_compute[227465]:   toggle values (group names lost): on, off; on, off
Nov 28 04:32:36 localhost nova_compute[227465]:   host CPU model: EPYC-Rome; vendor: AMD
Nov 28 04:32:36 localhost nova_compute[227465]:   CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, [list continues in next chunk]
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: GraniteRapids-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-noTSX-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Haswell-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-noTSX Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v3 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v6 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 
04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Icelake-Server-v7 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: IvyBridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: KnightsMill-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 
localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nehalem-v2 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G1-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G2 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G2-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G3-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G4-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Opteron_G5-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Penryn Nov 28 04:32:36 localhost nova_compute[227465]: Penryn-v1 Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge Nov 28 04:32:36 localhost 
nova_compute[227465]: SandyBridge-IBRS Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v1 Nov 28 04:32:36 localhost nova_compute[227465]: SandyBridge-v2 Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: SapphireRapids-v1 Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost nova_compute[227465]: Nov 28 04:32:36 localhost 
Nov 28 04:32:36 localhost nova_compute[227465]: [libvirt domain capabilities XML dump; the element markup was lost in capture and only bare values survive. Recoverable values, in original order, grouped by apparent element:]
Nov 28 04:32:36 localhost nova_compute[227465]:   CPU models: SapphireRapids-v2 SapphireRapids-v3 SierraForest SierraForest-v1 Skylake-Client Skylake-Client-IBRS Skylake-Client-noTSX-IBRS Skylake-Client-v1 Skylake-Client-v2 Skylake-Client-v3 Skylake-Client-v4 Skylake-Server Skylake-Server-IBRS Skylake-Server-noTSX-IBRS Skylake-Server-v1 Skylake-Server-v2 Skylake-Server-v3 Skylake-Server-v4 Skylake-Server-v5 Snowridge Snowridge-v1 Snowridge-v2 Snowridge-v3 Snowridge-v4 Westmere Westmere-IBRS Westmere-v1 Westmere-v2 athlon athlon-v1 core2duo core2duo-v1 coreduo coreduo-v1 kvm32 kvm32-v1 kvm64 kvm64-v1 n270 n270-v1 pentium pentium-v1 pentium2 pentium2-v1 pentium3 pentium3-v1 phenom phenom-v1 qemu32 qemu32-v1 qemu64 qemu64-v1
Nov 28 04:32:36 localhost nova_compute[227465]:   memory backing source types: file anonymous memfd
Nov 28 04:32:36 localhost nova_compute[227465]:   disk devices: disk cdrom floppy lun; buses: ide fdc scsi virtio usb sata; models: virtio virtio-transitional virtio-non-transitional
Nov 28 04:32:36 localhost nova_compute[227465]:   graphics types: vnc egl-headless dbus
Nov 28 04:32:36 localhost nova_compute[227465]:   hostdev mode: subsystem; startup policy: default mandatory requisite optional; subsystem types: usb pci scsi
Nov 28 04:32:36 localhost nova_compute[227465]:   rng models: virtio virtio-transitional virtio-non-transitional; backends: random egd builtin
Nov 28 04:32:36 localhost nova_compute[227465]:   filesystem driver types: path handle virtiofs
Nov 28 04:32:36 localhost nova_compute[227465]:   TPM models: tpm-tis tpm-crb; backends: emulator external; backend version: 2.0; redirdev bus: usb; channel types: pty unix
Nov 28 04:32:36 localhost nova_compute[227465]:   crypto: qemu builtin; interface backends: default passt; panic models: isa hyperv
Nov 28 04:32:36 localhost nova_compute[227465]:   character device types: null vc pty dev file pipe stdio udp tcp unix qemu-vdagent dbus
Nov 28 04:32:36 localhost nova_compute[227465]:   Hyper-V enlightenments: relaxed vapic spinlocks vpindex runtime synic stimer reset vendor_id frequencies reenlightenment tlbflush ipi avic emsr_bitmap xmm_input; values: 4095 on off off Linux KVM Hv
Nov 28 04:32:36 localhost nova_compute[227465]:   launch security: tdx
Nov 28 04:32:36 localhost nova_compute[227465]: [end of capability dump] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.167 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.167 227469 INFO nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Secure Boot support detected
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.170 227469 INFO nova.virt.libvirt.driver [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.181 227469 DEBUG nova.virt.libvirt.driver [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.196 227469 INFO nova.virt.node [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from
/var/lib/nova/compute_id
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.211 227469 DEBUG nova.compute.manager [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Verified node 72fba1ca-0d86-48af-8a3d-510284dfd0e0 matches my host np0005538515.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.229 227469 INFO nova.compute.manager [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.666 227469 INFO nova.service [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Updating service version for nova-compute on np0005538515.localdomain from 57 to 66
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.697 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.697 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.697 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.697 227469 DEBUG nova.compute.resource_tracker [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 28 04:32:36 localhost nova_compute[227465]: 2025-11-28 09:32:36.698 227469 DEBUG oslo_concurrency.processutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.135 227469 DEBUG oslo_concurrency.processutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 04:32:37 localhost systemd[1]: Started libvirt nodedev daemon.
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.454 227469 WARNING nova.virt.libvirt.driver [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.457 227469 DEBUG nova.compute.resource_tracker [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13629MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.457 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.458 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.576 227469 DEBUG nova.compute.resource_tracker [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.577 227469 DEBUG nova.compute.resource_tracker [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.638 227469 DEBUG nova.scheduler.client.report [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.669 227469 DEBUG nova.scheduler.client.report [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.670 227469 DEBUG nova.compute.provider_tree [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.684 227469 DEBUG nova.scheduler.client.report [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0,
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.704 227469 DEBUG nova.scheduler.client.report [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: HW_CPU_X86_SSE4A,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_CLMUL,HW_CPU_X86_ABM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_F16C _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:32:37 localhost nova_compute[227465]: 2025-11-28 09:32:37.726 227469 DEBUG oslo_concurrency.processutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Running cmd (subprocess): ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.198 227469 DEBUG oslo_concurrency.processutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.204 227469 DEBUG nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Nov 28 04:32:38 localhost nova_compute[227465]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.205 227469 INFO nova.virt.libvirt.host [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] kernel doesn't support AMD SEV#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.207 227469 DEBUG nova.compute.provider_tree [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.207 227469 DEBUG nova.virt.libvirt.driver [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.233 227469 DEBUG nova.scheduler.client.report [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Inventory has not changed for provider 
72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.303 227469 DEBUG nova.compute.provider_tree [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Updating resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.336 227469 DEBUG nova.compute.resource_tracker [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.337 227469 DEBUG oslo_concurrency.lockutils [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.338 227469 DEBUG nova.service [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.387 227469 DEBUG 
nova.service [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Nov 28 04:32:38 localhost nova_compute[227465]: 2025-11-28 09:32:38.388 227469 DEBUG nova.servicegroup.drivers.db [None req-b811f39d-3152-4ae0-8dd6-76baa64b5d28 - - - - - -] DB_Driver: join new ServiceGroup member np0005538515.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Nov 28 04:32:38 localhost python3.9[228190]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:32:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29098 DF PROTO=TCP SPT=54226 DPT=9102 SEQ=3263504921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD22A7A0000000001030307) Nov 28 04:32:40 localhost python3.9[228302]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None 
device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 28 04:32:40 localhost systemd-journald[48427]: Field hash table of 
/run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 121.3 (404 of 333 items), suggesting rotation. Nov 28 04:32:40 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:32:40 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:32:40 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:32:41 localhost python3.9[228436]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:32:41 localhost systemd[1]: Stopping nova_compute container... Nov 28 04:32:42 localhost nova_compute[227465]: 2025-11-28 09:32:42.510 227469 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Nov 28 04:32:42 localhost nova_compute[227465]: 2025-11-28 09:32:42.512 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 04:32:42 localhost nova_compute[227465]: 2025-11-28 09:32:42.512 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 04:32:42 localhost nova_compute[227465]: 2025-11-28 09:32:42.513 227469 DEBUG oslo_concurrency.lockutils [None req-10c42a77-220c-4c24-9ffc-bdb7a8461c78 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 04:32:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25476 DF PROTO=TCP SPT=59640 DPT=9105 SEQ=2968183367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD238FA0000000001030307) Nov 28 04:32:42 localhost journal[227736]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Nov 28 04:32:42 localhost journal[227736]: hostname: np0005538515.localdomain Nov 28 04:32:42 localhost journal[227736]: End of file while reading data: Input/output error Nov 28 04:32:42 localhost systemd[1]: libpod-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e.scope: Deactivated successfully. Nov 28 04:32:42 localhost systemd[1]: libpod-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e.scope: Consumed 3.832s CPU time. 
Nov 28 04:32:42 localhost podman[228440]: 2025-11-28 09:32:42.884742718 +0000 UTC m=+1.286433831 container died 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Nov 28 04:32:42 localhost systemd[1]: tmp-crun.qDM6aX.mount: Deactivated successfully. Nov 28 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e-userdata-shm.mount: Deactivated successfully. 
Nov 28 04:32:42 localhost systemd[1]: var-lib-containers-storage-overlay-7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942-merged.mount: Deactivated successfully. Nov 28 04:32:42 localhost podman[228440]: 2025-11-28 09:32:42.94714612 +0000 UTC m=+1.348837263 container cleanup 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:32:42 localhost podman[228440]: nova_compute Nov 28 04:32:43 localhost podman[228480]: error opening file `/run/crun/1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e/status`: 
No such file or directory Nov 28 04:32:43 localhost podman[228468]: 2025-11-28 09:32:43.044649401 +0000 UTC m=+0.064516609 container cleanup 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:32:43 localhost podman[228468]: nova_compute Nov 28 04:32:43 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. Nov 28 04:32:43 localhost systemd[1]: Stopped nova_compute container. Nov 28 04:32:43 localhost systemd[1]: Starting nova_compute container... 
Nov 28 04:32:43 localhost systemd[1]: Started libcrun container. Nov 28 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 04:32:43 localhost podman[228482]: 2025-11-28 09:32:43.193391926 +0000 UTC m=+0.109785372 container init 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm) Nov 28 04:32:43 localhost podman[228482]: 2025-11-28 09:32:43.203369413 +0000 UTC m=+0.119762869 container start 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', 
'/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:32:43 localhost podman[228482]: nova_compute Nov 28 04:32:43 localhost nova_compute[228497]: + sudo -E kolla_set_configs Nov 28 04:32:43 localhost systemd[1]: Started nova_compute container. Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Validating config file Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying service configuration files Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying 
/var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /etc/ceph Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Creating directory /etc/ceph Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/ceph Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for 
/etc/ceph/ceph.conf
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Writing out command to execute
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 28 04:32:43 localhost nova_compute[228497]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 28 04:32:43 localhost nova_compute[228497]: ++ cat /run_command
Nov 28 04:32:43 localhost nova_compute[228497]: + CMD=nova-compute
Nov 28 04:32:43 localhost nova_compute[228497]: + ARGS=
Nov 28 04:32:43 localhost nova_compute[228497]: + sudo kolla_copy_cacerts
Nov 28 04:32:43 localhost nova_compute[228497]: + [[ ! -n '' ]]
Nov 28 04:32:43 localhost nova_compute[228497]: + . kolla_extend_start
Nov 28 04:32:43 localhost nova_compute[228497]: Running command: 'nova-compute'
Nov 28 04:32:43 localhost nova_compute[228497]: + echo 'Running command: '\''nova-compute'\'''
Nov 28 04:32:43 localhost nova_compute[228497]: + umask 0022
Nov 28 04:32:43 localhost nova_compute[228497]: + exec nova-compute
Nov 28 04:32:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59858 DF PROTO=TCP SPT=36440 DPT=9101 SEQ=279068341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD23E500000000001030307)
Nov 28 04:32:44 localhost nova_compute[228497]: 2025-11-28 09:32:44.929 228501 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 04:32:44 localhost nova_compute[228497]: 2025-11-28 09:32:44.930 228501 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 04:32:44 localhost nova_compute[228497]: 2025-11-28 09:32:44.930 228501 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 28 04:32:44 localhost nova_compute[228497]: 2025-11-28 09:32:44.930 228501 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.044 228501 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.065 228501 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.065 228501 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.444 228501 INFO nova.virt.driver [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.550 228501 INFO nova.compute.provider_config [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.569 228501 WARNING nova.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.569 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.569 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.570 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.570 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.570 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.571 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.571 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.571 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.571 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.571 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.572 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.573 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.574 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] console_host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.574 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.574 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.574 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.574 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.575 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.575 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.575 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.575 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.575 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.576 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.576 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.576 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.576 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.576 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] flat_injected = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.577 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.577 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.577 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.577 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.577 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.578 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.578 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.578 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.578 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.578 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.579 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.579 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.579 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.579 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_usage_audit = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.579 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.580 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.580 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.580 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.580 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.580 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.581 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.582 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.582 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.582 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.582 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.582 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] long_rpc_timeout = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.583 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.584 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.584 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.584 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.584 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.584 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.585 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.586 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.586 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.586 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.586 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.586 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.587 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.587 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.587 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.587 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.587 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.588 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.589 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.589 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.589 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.589 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.589 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.590 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.591 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.591 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.591 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.591 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.591 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.592 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_down_time = 60 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.593 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.593 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.593 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.593 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.593 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ssl_only = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.594 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.595 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.595 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.595 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.595 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.595 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.596 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.596 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.596 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.596 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.596 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vcpu_pin_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.597 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.598 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] web = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.598 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.598 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.598 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.598 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.auth_strategy = keystone 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.599 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.600 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.601 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.602 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.enable_retry_client = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.603 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_password = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.604 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_socket_timeout = 1.0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.605 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.606 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG 
oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.607 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 
- - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.608 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.cpu_dedicated_set = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.609 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.provider_config_location = 
/etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.610 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.611 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.endpoint_override = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.612 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.613 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] cyborg.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 
DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.614 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.615 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.616 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.connection_debug = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.617 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.db_retry_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.618 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.pool_timeout = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.619 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ephemeral_storage_encryption.key_size = 512 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.620 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.621 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.621 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.621 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.621 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.622 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.622 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.622 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.622 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.622 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.623 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.rbd_user = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.624 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - 
- - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.625 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.verify_glance_signatures = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.626 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.627 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.627 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.enable_remotefx = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.627 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.627 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.627 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.628 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.629 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.629 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.629 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.629 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.629 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] mks.enabled = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.630 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.631 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.632 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.632 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.collect_timing = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.632 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.632 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.632 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 
localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.633 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.service_type = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.634 228501 DEBUG 
oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] key_manager.backend = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.635 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.636 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.637 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.collect_timing = 
False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.638 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.639 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.640 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vault.vault_url = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.641 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.642 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.status_code_retry_delay = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.643 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.644 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.645 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.646 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.647 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.648 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.649 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 WARNING oslo_config.cfg [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal ( Nov 28 04:32:45 localhost nova_compute[228497]: live_migration_uri is deprecated for removal in favor of two other options that Nov 28 04:32:45 localhost nova_compute[228497]: allow to change live migration scheme and target URI: ``live_migration_scheme`` Nov 28 04:32:45 localhost nova_compute[228497]: and ``live_migration_inbound_addr`` respectively. Nov 28 04:32:45 localhost nova_compute[228497]: ). Its value may be silently ignored in the future.#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 DEBUG 
oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.650 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG 
oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.651 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.652 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rbd_secret_uuid = 2c5417c9-00eb-57d5-a565-ddecbc7995c1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service 
[None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.653 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.654 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.655 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.656 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.657 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.658 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.659 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.660 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.661 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.662 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.663 228501 
DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.664 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.665 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.service_type = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.system_scope = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.666 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.667 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.injected_files = 5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.668 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.669 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.670 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = 
None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.671 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 
DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.672 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.max_instances_per_host = 
50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.673 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.674 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.675 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.676 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service 
[None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.677 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.678 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - 
- -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.679 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.auth_type = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.680 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.681 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.datastore_regex = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.host_ip = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.682 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.683 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 
09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.684 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 
228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.685 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.686 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.687 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.688 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.689 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.690 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.691 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 
localhost nova_compute[228497]: 2025-11-28 09:32:45.692 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.693 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.694 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 
228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.695 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.696 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_notifications.driver = ['noop'] 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.697 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.698 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.699 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.700 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 
2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.service_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.701 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG 
oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.702 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None 
req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost 
nova_compute[228497]: 2025-11-28 09:32:45.703 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.704 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.705 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.706 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.logger_name = 
os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.707 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] 
nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.708 228501 DEBUG oslo_service.service [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.710 228501 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.726 228501 INFO nova.virt.node [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from /var/lib/nova/compute_id#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.727 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.727 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.728 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Starting connection event dispatch thread initialize 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.728 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.737 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.739 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.740 228501 INFO nova.virt.libvirt.driver [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Connection event '1' reason 'None'#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.744 228501 INFO nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Libvirt host capabilities [multi-line capabilities XML; element tags stripped by the log capture, leaving only bare per-line timestamp prefixes. Recoverable values: host UUID 4c358f0e-7e15-44e5-bde2-714780d05a92; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp, rdma; memory/pages figures 16116612, 4029153, 0, 0; secmodel selinux] Nov 28 04:32:45 localhost
nova_compute[228497]: system_u:system_r:svirt_t:s0 Nov 28 04:32:45 localhost nova_compute[228497]: system_u:system_r:svirt_tcg_t:s0 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: dac Nov 28 04:32:45 localhost nova_compute[228497]: 0 Nov 28 04:32:45 localhost nova_compute[228497]: +107:+107 Nov 28 04:32:45 localhost nova_compute[228497]: +107:+107 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: hvm Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 32 Nov 28 04:32:45 localhost nova_compute[228497]: /usr/libexec/qemu-kvm Nov 28 04:32:45 localhost nova_compute[228497]: pc-i440fx-rhel7.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.8.0 Nov 28 04:32:45 localhost nova_compute[228497]: q35 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.4.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.5.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.3.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel7.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.4.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.2.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.2.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.0.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.0.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.1.0 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: hvm Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 64 Nov 28 04:32:45 localhost nova_compute[228497]: /usr/libexec/qemu-kvm Nov 28 04:32:45 localhost nova_compute[228497]: pc-i440fx-rhel7.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.8.0 Nov 28 04:32:45 localhost nova_compute[228497]: q35 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.4.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.5.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.3.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel7.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.4.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.2.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.2.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.0.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.0.0 Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel8.1.0 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: #033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.749 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.752 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: /usr/libexec/qemu-kvm Nov 28 04:32:45 localhost nova_compute[228497]: kvm Nov 28 04:32:45 localhost nova_compute[228497]: pc-q35-rhel9.8.0 Nov 28 04:32:45 localhost nova_compute[228497]: i686 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: rom Nov 28 04:32:45 localhost nova_compute[228497]: pflash Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: yes Nov 28 04:32:45 localhost nova_compute[228497]: no Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: no Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome Nov 28 04:32:45 localhost nova_compute[228497]: AMD Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 486 Nov 28 04:32:45 localhost nova_compute[228497]: 486-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v1 Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Conroe Nov 28 04:32:45 localhost nova_compute[228497]: Conroe-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Denverton Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Denverton-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Denverton-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Denverton-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Genoa Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Genoa-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-IBPB Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v4 Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v1 Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v2 Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids-v1
Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-noTSX
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v5
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v6
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v7
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v1
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v2
Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill
Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Penryn
Nov 28 04:32:45 localhost nova_compute[228497]: Penryn-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v2
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v2
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v3
Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest
Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-noTSX-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-noTSX-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v5
28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Westmere Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-v2 Nov 28 04:32:45 localhost nova_compute[228497]: athlon Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: athlon-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: core2duo Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: core2duo-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: coreduo 
Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: coreduo-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: kvm32 Nov 28 04:32:45 localhost nova_compute[228497]: kvm32-v1 Nov 28 04:32:45 localhost nova_compute[228497]: kvm64 Nov 28 04:32:45 localhost nova_compute[228497]: kvm64-v1 Nov 28 04:32:45 localhost nova_compute[228497]: n270 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: n270-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: pentium Nov 28 04:32:45 localhost nova_compute[228497]: pentium-v1 Nov 28 04:32:45 localhost nova_compute[228497]: pentium2 Nov 28 04:32:45 localhost nova_compute[228497]: pentium2-v1 Nov 28 04:32:45 localhost nova_compute[228497]: pentium3 Nov 28 04:32:45 localhost nova_compute[228497]: pentium3-v1 Nov 28 04:32:45 localhost nova_compute[228497]: phenom Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: phenom-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: qemu32 Nov 28 04:32:45 localhost nova_compute[228497]: qemu32-v1 Nov 28 04:32:45 localhost nova_compute[228497]: qemu64 Nov 28 04:32:45 
localhost nova_compute[228497]: qemu64-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: file Nov 28 04:32:45 localhost nova_compute[228497]: anonymous Nov 28 04:32:45 localhost nova_compute[228497]: memfd Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: disk Nov 28 04:32:45 localhost nova_compute[228497]: cdrom Nov 28 04:32:45 localhost nova_compute[228497]: floppy Nov 28 04:32:45 localhost nova_compute[228497]: lun Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: fdc Nov 28 04:32:45 localhost nova_compute[228497]: scsi Nov 28 04:32:45 localhost nova_compute[228497]: virtio Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: sata Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: virtio Nov 28 04:32:45 localhost nova_compute[228497]: virtio-transitional Nov 28 04:32:45 localhost nova_compute[228497]: virtio-non-transitional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: vnc Nov 28 04:32:45 localhost nova_compute[228497]: egl-headless Nov 28 04:32:45 localhost nova_compute[228497]: dbus Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: subsystem Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: default Nov 28 04:32:45 localhost nova_compute[228497]: mandatory Nov 28 04:32:45 localhost nova_compute[228497]: requisite Nov 28 04:32:45 localhost nova_compute[228497]: optional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: pci Nov 28 04:32:45 localhost nova_compute[228497]: scsi Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: virtio Nov 28 04:32:45 localhost nova_compute[228497]: virtio-transitional Nov 28 04:32:45 localhost nova_compute[228497]: virtio-non-transitional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: random Nov 28 04:32:45 localhost nova_compute[228497]: egd Nov 28 04:32:45 localhost nova_compute[228497]: builtin Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: path Nov 28 04:32:45 localhost nova_compute[228497]: handle Nov 28 04:32:45 localhost nova_compute[228497]: virtiofs Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: tpm-tis Nov 28 04:32:45 localhost nova_compute[228497]: tpm-crb Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: emulator Nov 28 04:32:45 localhost nova_compute[228497]: external Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 2.0 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: pty Nov 28 04:32:45 localhost nova_compute[228497]: unix Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: qemu Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: builtin Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: default Nov 28 04:32:45 localhost nova_compute[228497]: passt Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
Nov 28 04:32:45 localhost nova_compute[228497]: isa Nov 28 04:32:45 localhost nova_compute[228497]: hyperv Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: null Nov 28 04:32:45 localhost nova_compute[228497]: vc Nov 28 04:32:45 localhost nova_compute[228497]: pty Nov 28 04:32:45 localhost nova_compute[228497]: dev Nov 28 04:32:45 localhost nova_compute[228497]: file Nov 28 04:32:45 localhost nova_compute[228497]: pipe Nov 28 04:32:45 localhost nova_compute[228497]: stdio Nov 28 04:32:45 localhost nova_compute[228497]: udp Nov 28 04:32:45 localhost nova_compute[228497]: tcp Nov 28 04:32:45 localhost nova_compute[228497]: unix Nov 28 04:32:45 localhost nova_compute[228497]: qemu-vdagent Nov 28 04:32:45 localhost nova_compute[228497]: dbus Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: relaxed Nov 28 04:32:45 localhost nova_compute[228497]: vapic Nov 28 04:32:45 localhost nova_compute[228497]: spinlocks Nov 28 04:32:45 localhost nova_compute[228497]: vpindex Nov 28 04:32:45 localhost nova_compute[228497]: runtime Nov 28 04:32:45 localhost nova_compute[228497]: synic Nov 28 
04:32:45 localhost nova_compute[228497]: stimer Nov 28 04:32:45 localhost nova_compute[228497]: reset Nov 28 04:32:45 localhost nova_compute[228497]: vendor_id Nov 28 04:32:45 localhost nova_compute[228497]: frequencies Nov 28 04:32:45 localhost nova_compute[228497]: reenlightenment Nov 28 04:32:45 localhost nova_compute[228497]: tlbflush Nov 28 04:32:45 localhost nova_compute[228497]: ipi Nov 28 04:32:45 localhost nova_compute[228497]: avic Nov 28 04:32:45 localhost nova_compute[228497]: emsr_bitmap Nov 28 04:32:45 localhost nova_compute[228497]: xmm_input Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 4095 Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Linux KVM Hv Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: tdx Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.755 228501 DEBUG nova.virt.libvirt.volume.mount [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.759 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Libvirt host 
hypervisor capabilities for arch=i686 and machine_type=pc: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: /usr/libexec/qemu-kvm Nov 28 04:32:45 localhost nova_compute[228497]: kvm Nov 28 04:32:45 localhost nova_compute[228497]: pc-i440fx-rhel7.6.0 Nov 28 04:32:45 localhost nova_compute[228497]: i686 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: rom Nov 28 04:32:45 localhost nova_compute[228497]: pflash Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: yes Nov 28 04:32:45 localhost nova_compute[228497]: no Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: no Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome Nov 28 04:32:45 localhost nova_compute[228497]: AMD Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 486 Nov 28 04:32:45 localhost nova_compute[228497]: 486-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Broadwell-v4 Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v4 Nov 28 04:32:45 localhost 
Nov 28 04:32:45 localhost nova_compute[228497]: Cascadelake-Server-v5
Nov 28 04:32:45 localhost nova_compute[228497]: Conroe
Nov 28 04:32:45 localhost nova_compute[228497]: Conroe-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake
Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Cooperlake-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Denverton
Nov 28 04:32:45 localhost nova_compute[228497]: Denverton-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Denverton-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Denverton-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana
Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Dhyana-v2
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Genoa
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Genoa-v1
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-IBPB
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan-v1
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Milan-v2
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v1
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v2
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v3
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-Rome-v4
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v1
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v2
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v3
Nov 28 04:32:45 localhost nova_compute[228497]: EPYC-v4
Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids
Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids-v1
Nov 28 04:32:45 localhost nova_compute[228497]: GraniteRapids-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-noTSX
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v3
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v4
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v5
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v6
Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v7
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v1
Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v2
Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill
Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v2
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5
Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5-v1
Nov 28 04:32:45 localhost nova_compute[228497]: Penryn
Nov 28 04:32:45 localhost nova_compute[228497]: Penryn-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-IBRS
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v2
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v1
Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v2
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 
28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Westmere Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Westmere-v2 Nov 28 04:32:45 localhost nova_compute[228497]: athlon Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: athlon-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: core2duo Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: core2duo-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: coreduo Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: coreduo-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: kvm32 Nov 28 04:32:45 localhost nova_compute[228497]: kvm32-v1 Nov 28 04:32:45 localhost nova_compute[228497]: kvm64 Nov 28 04:32:45 localhost nova_compute[228497]: kvm64-v1 Nov 28 04:32:45 localhost nova_compute[228497]: n270 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: n270-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: pentium Nov 28 04:32:45 localhost nova_compute[228497]: pentium-v1 Nov 28 04:32:45 localhost nova_compute[228497]: pentium2 Nov 28 
04:32:45 localhost nova_compute[228497]: pentium2-v1 Nov 28 04:32:45 localhost nova_compute[228497]: pentium3 Nov 28 04:32:45 localhost nova_compute[228497]: pentium3-v1 Nov 28 04:32:45 localhost nova_compute[228497]: phenom Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: phenom-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: qemu32 Nov 28 04:32:45 localhost nova_compute[228497]: qemu32-v1 Nov 28 04:32:45 localhost nova_compute[228497]: qemu64 Nov 28 04:32:45 localhost nova_compute[228497]: qemu64-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: file Nov 28 04:32:45 localhost nova_compute[228497]: anonymous Nov 28 04:32:45 localhost nova_compute[228497]: memfd Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: disk Nov 28 04:32:45 localhost nova_compute[228497]: cdrom Nov 28 04:32:45 localhost nova_compute[228497]: floppy Nov 28 04:32:45 localhost nova_compute[228497]: lun Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: ide Nov 28 04:32:45 localhost nova_compute[228497]: fdc Nov 28 04:32:45 localhost nova_compute[228497]: scsi Nov 28 04:32:45 localhost 
nova_compute[228497]: virtio Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: sata Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: virtio Nov 28 04:32:45 localhost nova_compute[228497]: virtio-transitional Nov 28 04:32:45 localhost nova_compute[228497]: virtio-non-transitional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: vnc Nov 28 04:32:45 localhost nova_compute[228497]: egl-headless Nov 28 04:32:45 localhost nova_compute[228497]: dbus Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: subsystem Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: default Nov 28 04:32:45 localhost nova_compute[228497]: mandatory Nov 28 04:32:45 localhost nova_compute[228497]: requisite Nov 28 04:32:45 localhost nova_compute[228497]: optional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: pci Nov 28 04:32:45 localhost nova_compute[228497]: scsi Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
virtio Nov 28 04:32:45 localhost nova_compute[228497]: virtio-transitional Nov 28 04:32:45 localhost nova_compute[228497]: virtio-non-transitional Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: random Nov 28 04:32:45 localhost nova_compute[228497]: egd Nov 28 04:32:45 localhost nova_compute[228497]: builtin Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: path Nov 28 04:32:45 localhost nova_compute[228497]: handle Nov 28 04:32:45 localhost nova_compute[228497]: virtiofs Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: tpm-tis Nov 28 04:32:45 localhost nova_compute[228497]: tpm-crb Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: emulator Nov 28 04:32:45 localhost nova_compute[228497]: external Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 2.0 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: usb Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: pty Nov 28 04:32:45 localhost nova_compute[228497]: unix Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: qemu Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: builtin Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: default Nov 28 04:32:45 localhost nova_compute[228497]: passt Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: isa Nov 28 04:32:45 localhost nova_compute[228497]: hyperv Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: null Nov 28 04:32:45 localhost nova_compute[228497]: vc Nov 28 04:32:45 localhost nova_compute[228497]: pty Nov 28 04:32:45 localhost nova_compute[228497]: dev Nov 28 04:32:45 localhost nova_compute[228497]: file Nov 28 04:32:45 localhost nova_compute[228497]: pipe Nov 28 04:32:45 localhost nova_compute[228497]: stdio Nov 28 04:32:45 localhost nova_compute[228497]: udp Nov 28 04:32:45 localhost nova_compute[228497]: tcp Nov 28 04:32:45 localhost nova_compute[228497]: unix Nov 28 04:32:45 localhost nova_compute[228497]: qemu-vdagent Nov 28 04:32:45 localhost nova_compute[228497]: dbus Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 
28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: relaxed Nov 28 04:32:45 localhost nova_compute[228497]: vapic Nov 28 04:32:45 localhost nova_compute[228497]: spinlocks Nov 28 04:32:45 localhost nova_compute[228497]: vpindex Nov 28 04:32:45 localhost nova_compute[228497]: runtime Nov 28 04:32:45 localhost nova_compute[228497]: synic Nov 28 04:32:45 localhost nova_compute[228497]: stimer Nov 28 04:32:45 localhost nova_compute[228497]: reset Nov 28 04:32:45 localhost nova_compute[228497]: vendor_id Nov 28 04:32:45 localhost nova_compute[228497]: frequencies Nov 28 04:32:45 localhost nova_compute[228497]: reenlightenment Nov 28 04:32:45 localhost nova_compute[228497]: tlbflush Nov 28 04:32:45 localhost nova_compute[228497]: ipi Nov 28 04:32:45 localhost nova_compute[228497]: avic Nov 28 04:32:45 localhost nova_compute[228497]: emsr_bitmap Nov 28 04:32:45 localhost nova_compute[228497]: xmm_input Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 4095 Nov 28 04:32:45 localhost nova_compute[228497]: on Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: off Nov 28 04:32:45 localhost nova_compute[228497]: Linux KVM Hv Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
Nov 28 04:32:45 localhost nova_compute[228497]: tdx
Nov 28 04:32:45 localhost nova_compute[228497]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.781 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.786 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 28 04:32:45 localhost nova_compute[228497]: [domainCapabilities XML; element markup lost in capture; recoverable values follow]
Nov 28 04:32:45 localhost nova_compute[228497]:   path: /usr/libexec/qemu-kvm
Nov 28 04:32:45 localhost nova_compute[228497]:   domain: kvm
Nov 28 04:32:45 localhost nova_compute[228497]:   machine: pc-q35-rhel9.8.0
Nov 28 04:32:45 localhost nova_compute[228497]:   arch: x86_64
Nov 28 04:32:45 localhost nova_compute[228497]:   os firmware: efi
Nov 28 04:32:45 localhost nova_compute[228497]:   loader values: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd, /usr/share/edk2/ovmf/OVMF_CODE.fd, /usr/share/edk2/ovmf/OVMF.amdsev.fd, /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd
Nov 28 04:32:45 localhost nova_compute[228497]:   loader type: rom, pflash; readonly: yes, no; secure: yes, no; on, off; on, off
Nov 28 04:32:45 localhost nova_compute[228497]:   cpu mode host-model: EPYC-Rome, vendor AMD
Nov 28 04:32:45 localhost nova_compute[228497]:   cpu mode custom, models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v6 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v7 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
IvyBridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2 Nov 28 04:32:45 localhost 
nova_compute[228497]: Opteron_G2-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Penryn Nov 28 04:32:45 localhost nova_compute[228497]: Penryn-v1 Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SierraForest-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Client-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-IBRS Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v2 Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Skylake-Server-v5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 
28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Snowridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: 
Nov 28 04:32:45 localhost nova_compute[228497]: [libvirt domain capabilities XML; element tags lost in log capture, surviving values follow in original order]
Nov 28 04:32:45 localhost nova_compute[228497]:   CPU models (cont.): Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 28 04:32:45 localhost nova_compute[228497]:   memory backing sources: file, anonymous, memfd
Nov 28 04:32:45 localhost nova_compute[228497]:   disk devices: disk, cdrom, floppy, lun; buses: fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:32:45 localhost nova_compute[228497]:   graphics types: vnc, egl-headless, dbus
Nov 28 04:32:45 localhost nova_compute[228497]:   hostdev modes: subsystem; startup policies: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:32:45 localhost nova_compute[228497]:   rng backends: random, egd, builtin
Nov 28 04:32:45 localhost nova_compute[228497]:   filesystem drivers: path, handle, virtiofs
Nov 28 04:32:45 localhost nova_compute[228497]:   tpm models: tpm-tis, tpm-crb; backends: emulator, external; version: 2.0
Nov 28 04:32:45 localhost nova_compute[228497]:   redirdev bus: usb; further values: pty, unix, qemu, builtin, default, passt
Nov 28 04:32:45 localhost nova_compute[228497]:   panic models: isa, hyperv
Nov 28 04:32:45 localhost nova_compute[228497]:   console/channel types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 28 04:32:45 localhost nova_compute[228497]:   hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Nov 28 04:32:45 localhost nova_compute[228497]:   further values: 4095, on, off, off, Linux KVM Hv; launch security types: tdx
Nov 28 04:32:45 localhost nova_compute[228497]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.837 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 28 04:32:45 localhost nova_compute[228497]: [domain capabilities XML for the pc machine type; element tags lost in log capture, surviving values follow in original order]
Nov 28 04:32:45 localhost nova_compute[228497]:   emulator: /usr/libexec/qemu-kvm; domain: kvm; machine: pc-i440fx-rhel7.6.0; arch: x86_64
Nov 28 04:32:45 localhost nova_compute[228497]:   firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; types: rom, pflash; readonly: yes, no; secure: no
Nov 28 04:32:45 localhost nova_compute[228497]:   cpu mode toggles: on, off / on, off; host-model: EPYC-Rome, vendor AMD
Nov 28 04:32:45 localhost nova_compute[228497]:   supported CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1, Denverton-v2, Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1, EPYC-Rome-v2, EPYC-Rome-v3, EPYC-Rome-v4, EPYC-v1, EPYC-v2, EPYC-v3, EPYC-v4, GraniteRapids, GraniteRapids-v1, [list truncated in capture]
04:32:45 localhost nova_compute[228497]: GraniteRapids-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-noTSX-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Haswell-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-noTSX Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v3 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v6 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 
04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Icelake-Server-v7 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: IvyBridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: KnightsMill-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 
localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nehalem-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G1-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G2-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G3-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G4-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Opteron_G5-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Penryn Nov 28 04:32:45 localhost nova_compute[228497]: Penryn-v1 Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge Nov 28 04:32:45 localhost 
nova_compute[228497]: SandyBridge-IBRS Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v1 Nov 28 04:32:45 localhost nova_compute[228497]: SandyBridge-v2 Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v1 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: SapphireRapids-v2 Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost nova_compute[228497]: Nov 28 04:32:45 localhost 
nova_compute[228497]: [libvirt domain capabilities XML continued here; element markup was stripped in capture, leaving only the enum values. Surviving values, in document order — CPU models: SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1; remaining enum value groups: file, anonymous, memfd; disk, cdrom, floppy, lun; ide, fdc, scsi, virtio, usb, sata; virtio, virtio-transitional, virtio-non-transitional; vnc, egl-headless, dbus; subsystem; default, mandatory, requisite, optional; usb, pci, scsi; virtio, virtio-transitional, virtio-non-transitional; random, egd, builtin; path, handle, virtiofs; tpm-tis, tpm-crb; emulator, external; 2.0; usb; pty, unix; qemu; builtin; default, passt; isa, hyperv; null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus; Hyper-V features relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input; 4095, on, off, off, Linux KVM Hv; tdx] _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.886 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.886 228501 INFO nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Secure Boot support detected#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.889 228501 INFO nova.virt.libvirt.driver [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.898 228501 DEBUG nova.virt.libvirt.driver [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.915 228501 INFO nova.virt.node [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from
/var/lib/nova/compute_id#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.931 228501 DEBUG nova.compute.manager [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Verified node 72fba1ca-0d86-48af-8a3d-510284dfd0e0 matches my host np0005538515.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Nov 28 04:32:45 localhost nova_compute[228497]: 2025-11-28 09:32:45.955 228501 INFO nova.compute.manager [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.023 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.024 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.024 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.024 228501 DEBUG nova.compute.resource_tracker [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Auditing locally available compute resources for 
np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.024 228501 DEBUG oslo_concurrency.processutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.428 228501 DEBUG oslo_concurrency.processutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:32:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6001 DF PROTO=TCP SPT=42800 DPT=9100 SEQ=1532807494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2477A0000000001030307) Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.690 228501 WARNING nova.virt.libvirt.driver [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.691 228501 DEBUG nova.compute.resource_tracker [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13611MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.691 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.692 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.805 228501 DEBUG nova.compute.resource_tracker [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.805 228501 DEBUG nova.compute.resource_tracker [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.828 228501 DEBUG nova.scheduler.client.report [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 28 04:32:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.901 228501 DEBUG nova.scheduler.client.report [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.901 228501 DEBUG nova.compute.provider_tree [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.925 228501 DEBUG nova.scheduler.client.report [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.952 228501 DEBUG nova.scheduler.client.report [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 28 04:32:46 localhost nova_compute[228497]: 2025-11-28 09:32:46.975 228501 DEBUG oslo_concurrency.processutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 04:32:46 localhost podman[228574]: 2025-11-28 09:32:46.99841 +0000 UTC m=+0.099269681 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 28 04:32:47 localhost podman[228574]: 2025-11-28 09:32:47.066636197 +0000 UTC m=+0.167495898 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 04:32:47 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.455 228501 DEBUG oslo_concurrency.processutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.462 228501 DEBUG nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 28 04:32:47 localhost nova_compute[228497]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.463 228501 INFO nova.virt.libvirt.host [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] kernel doesn't support AMD SEV
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.464 228501 DEBUG nova.compute.provider_tree [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.464 228501 DEBUG nova.virt.libvirt.driver [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.493 228501 DEBUG nova.scheduler.client.report [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.515 228501 DEBUG nova.compute.resource_tracker [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.515 228501 DEBUG oslo_concurrency.lockutils [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.515 228501 DEBUG nova.service [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.544 228501 DEBUG nova.service [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 28 04:32:47 localhost nova_compute[228497]: 2025-11-28 09:32:47.544 228501 DEBUG nova.servicegroup.drivers.db [None req-413b9c7d-c524-48d4-b5b8-346a93ed4d5d - - - - - -] DB_Driver: join new ServiceGroup member np0005538515.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 28 04:32:49 localhost python3.9[228774]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 28 04:32:49 localhost systemd[1]: Started libpod-conmon-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope.
Nov 28 04:32:49 localhost systemd[1]: Started libcrun container.
Nov 28 04:32:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 28 04:32:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 28 04:32:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 28 04:32:49 localhost podman[228798]: 2025-11-28 09:32:49.431744221 +0000 UTC m=+0.142565399 container init acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 04:32:49 localhost podman[228798]: 2025-11-28 09:32:49.442509199 +0000 UTC m=+0.153330377 container start acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:32:49 localhost python3.9[228774]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Applying nova statedir ownership
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/469bc4441baff9216df986857f9ff45dbf25965a8d2f755a6449ac2645cb7191
Nov 28 04:32:49 localhost nova_compute_init[228837]: INFO:nova_statedir:Nova statedir ownership complete
Nov 28 04:32:49 localhost systemd[1]: libpod-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope: Deactivated successfully.
Nov 28 04:32:49 localhost podman[228838]: 2025-11-28 09:32:49.51496645 +0000 UTC m=+0.052755144 container died acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible)
Nov 28 04:32:49 localhost podman[228850]: 2025-11-28 09:32:49.597762178 +0000 UTC m=+0.076205002 container cleanup acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 04:32:49 localhost systemd[1]: libpod-conmon-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope: Deactivated successfully.
Nov 28 04:32:50 localhost systemd[1]: session-53.scope: Deactivated successfully.
Nov 28 04:32:50 localhost systemd[1]: session-53.scope: Consumed 2min 14.190s CPU time.
Nov 28 04:32:50 localhost systemd-logind[763]: Session 53 logged out. Waiting for processes to exit.
Nov 28 04:32:50 localhost systemd-logind[763]: Removed session 53.
Nov 28 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay-8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79-merged.mount: Deactivated successfully.
Nov 28 04:32:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d-userdata-shm.mount: Deactivated successfully.
Nov 28 04:32:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6002 DF PROTO=TCP SPT=42800 DPT=9100 SEQ=1532807494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2573B0000000001030307)
Nov 28 04:32:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:32:50.812 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:32:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:32:50.813 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:32:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:32:50.813 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 04:32:56 localhost sshd[228894]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:32:56 localhost systemd-logind[763]: New session 55 of user zuul.
Nov 28 04:32:56 localhost systemd[1]: Started Session 55 of User zuul.
Nov 28 04:32:57 localhost python3.9[229005]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 28 04:32:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=585 DF PROTO=TCP SPT=50378 DPT=9105 SEQ=3544091224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD272830000000001030307)
Nov 28 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:32:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:32:57 localhost systemd[1]: tmp-crun.gIgpU9.mount: Deactivated successfully.
Nov 28 04:32:57 localhost podman[229026]: 2025-11-28 09:32:57.970844146 +0000 UTC m=+0.077783384 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 28 04:32:58 localhost podman[229028]: 2025-11-28 09:32:58.029723474 +0000 UTC m=+0.133729334 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:32:58 localhost podman[229026]: 2025-11-28 09:32:58.061094274 +0000 UTC m=+0.168033522 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 28 04:32:58 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:32:58 localhost podman[229028]: 2025-11-28 09:32:58.115312976 +0000 UTC m=+0.219318796 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Nov 28 04:32:58 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:32:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=586 DF PROTO=TCP SPT=50378 DPT=9105 SEQ=3544091224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2767B0000000001030307) Nov 28 04:32:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6003 DF PROTO=TCP SPT=42800 DPT=9100 SEQ=1532807494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD276FA0000000001030307) Nov 28 04:32:59 localhost python3.9[229160]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:32:59 localhost systemd[1]: Reloading. Nov 28 04:32:59 localhost systemd-rc-local-generator[229185]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:32:59 localhost systemd-sysv-generator[229190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:32:59 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:00 localhost python3.9[229303]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:33:00 localhost network[229320]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:33:00 localhost network[229321]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:33:00 localhost network[229322]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:33:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:33:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3407 DF PROTO=TCP SPT=33760 DPT=9105 SEQ=3839620824 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD282FA0000000001030307) Nov 28 04:33:04 localhost nova_compute[228497]: 2025-11-28 09:33:04.546 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:04 localhost nova_compute[228497]: 2025-11-28 09:33:04.582 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=588 DF PROTO=TCP SPT=50378 DPT=9105 SEQ=3544091224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD28E3A0000000001030307) Nov 28 04:33:07 localhost python3.9[229557]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:33:08 localhost python3.9[229668]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None 
owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:08 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation. Nov 28 04:33:08 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:33:08 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:33:08 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:33:09 localhost python3.9[229779]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54533 DF PROTO=TCP SPT=34982 DPT=9102 SEQ=4045366205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD29FBA0000000001030307) Nov 28 04:33:10 localhost python3.9[229889]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None 
creates=None removes=None stdin=None Nov 28 04:33:10 localhost python3.9[229999]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:33:12 localhost python3.9[230109]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:33:12 localhost systemd[1]: Reloading. Nov 28 04:33:12 localhost systemd-sysv-generator[230141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:33:12 localhost systemd-rc-local-generator[230138]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=589 DF PROTO=TCP SPT=50378 DPT=9105 SEQ=3544091224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2AEFA0000000001030307) Nov 28 04:33:13 localhost python3.9[230256]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:33:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43220 DF PROTO=TCP SPT=54008 DPT=9882 SEQ=4224566225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2B2FA0000000001030307) Nov 28 04:33:15 localhost python3.9[230367]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None 
access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:33:15 localhost python3.9[230475]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:33:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57904 DF PROTO=TCP SPT=54348 DPT=9100 SEQ=3096449806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2BCBA0000000001030307) Nov 28 04:33:16 localhost python3.9[230585]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:17 localhost python3.9[230671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322396.2710052-362-63714341438964/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=2879a2396d2687409963cab2311faed024e34763 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:33:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:33:18 localhost podman[230710]: 2025-11-28 09:33:18.011017949 +0000 UTC m=+0.106546351 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 28 04:33:18 localhost podman[230710]: 2025-11-28 09:33:18.050223369 +0000 UTC m=+0.145751721 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:33:18 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:33:18 localhost python3.9[230800]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None Nov 28 04:33:19 localhost python3.9[230910]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None Nov 28 04:33:20 localhost python3.9[231021]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None Nov 28 04:33:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57905 DF PROTO=TCP SPT=54348 DPT=9100 SEQ=3096449806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2CC7B0000000001030307) Nov 28 04:33:21 localhost python3.9[231137]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005538515.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None Nov 28 04:33:22 localhost python3.9[231253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True 
get_selinux_context=False Nov 28 04:33:23 localhost python3.9[231339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764322402.1703568-566-98328576520128/.source.conf _original_basename=ceilometer.conf follow=False checksum=e4f5a0d8a335534158f72dc0bd2ff76fd1e29e2d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:23 localhost python3.9[231447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:24 localhost python3.9[231533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764322403.2735925-566-53101525248345/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:25 localhost python3.9[231641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:25 localhost python3.9[231727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764322404.845922-566-53873045232368/.source.conf _original_basename=custom.conf 
follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:27 localhost python3.9[231835]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:33:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54147 DF PROTO=TCP SPT=56796 DPT=9105 SEQ=2235505020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2E7B20000000001030307) Nov 28 04:33:27 localhost python3.9[231943]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:33:28 localhost python3.9[232051]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54148 DF PROTO=TCP SPT=56796 DPT=9105 SEQ=2235505020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2EBBB0000000001030307) Nov 28 04:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:33:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:33:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16878 DF PROTO=TCP SPT=49914 DPT=9882 SEQ=202554163 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2ECDC0000000001030307) Nov 28 04:33:28 localhost python3.9[232137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322407.9801822-743-64086649190568/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:28 localhost podman[232139]: 2025-11-28 09:33:28.956425337 +0000 UTC m=+0.063851114 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent) Nov 28 04:33:28 localhost podman[232139]: 2025-11-28 09:33:28.96174248 +0000 UTC m=+0.069168247 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': 
{'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:33:28 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:33:29 localhost systemd[1]: tmp-crun.fGAcs8.mount: Deactivated successfully. 
Nov 28 04:33:29 localhost podman[232138]: 2025-11-28 09:33:29.022596173 +0000 UTC m=+0.128663859 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:33:29 localhost podman[232138]: 2025-11-28 09:33:29.080757343 +0000 UTC m=+0.186825029 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller) Nov 28 04:33:29 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:33:29 localhost python3.9[232287]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:29 localhost python3.9[232342]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:30 localhost python3.9[232450]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:31 localhost python3.9[232536]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322410.0726619-743-105798245900071/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:31 localhost python3.9[232644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Nov 28 04:33:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16880 DF PROTO=TCP SPT=49914 DPT=9882 SEQ=202554163 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD2F8FA0000000001030307) Nov 28 04:33:32 localhost python3.9[232730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322411.1797762-743-60880135725985/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:32 localhost python3.9[232838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:33 localhost python3.9[232924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322412.3166602-743-82576824235030/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:33 localhost python3.9[233032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False 
get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:34 localhost python3.9[233118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322413.4155815-743-117337531990705/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54150 DF PROTO=TCP SPT=56796 DPT=9105 SEQ=2235505020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3037A0000000001030307) Nov 28 04:33:35 localhost python3.9[233226]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:35 localhost python3.9[233312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322414.5669847-743-124438703405987/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:36 localhost python3.9[233420]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:36 localhost python3.9[233506]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322415.751184-743-64197244377666/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:38 localhost python3.9[233614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:38 localhost python3.9[233700]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322416.9033146-743-17101585493052/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31912 DF PROTO=TCP SPT=50028 DPT=9102 SEQ=3314951117 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT 
(020405500402080A5AD314FA0000000001030307) Nov 28 04:33:39 localhost python3.9[233808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:40 localhost python3.9[233894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322418.8352146-743-247817568612191/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:41 localhost python3.9[234002]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:41 localhost python3.9[234088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322420.8453689-743-248790475964933/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54151 DF PROTO=TCP SPT=56796 
DPT=9105 SEQ=2235505020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD322FB0000000001030307) Nov 28 04:33:42 localhost python3.9[234198]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:33:43 localhost python3.9[234308]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:33:43 localhost systemd[1]: Reloading. Nov 28 04:33:43 localhost systemd-rc-local-generator[234334]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:33:43 localhost systemd-sysv-generator[234338]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:44 localhost systemd[1]: Listening on Podman API Socket. Nov 28 04:33:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42018 DF PROTO=TCP SPT=53686 DPT=9101 SEQ=3661804958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD328B00000000001030307) Nov 28 04:33:44 localhost python3.9[234458]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.075 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.076 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost 
nova_compute[228497]: 2025-11-28 09:33:45.077 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.077 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.098 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.099 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.100 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.100 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.101 228501 DEBUG oslo_service.periodic_task [None 
req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.101 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.101 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.102 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.102 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.123 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.124 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.124 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.124 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 
09:33:45.125 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:33:45 localhost python3.9[234548]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322424.4426816-1258-216671780328919/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.606 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.841 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.843 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13617MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.844 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.844 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.907 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.907 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:33:45 localhost nova_compute[228497]: 2025-11-28 09:33:45.926 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:33:46 localhost python3.9[234623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:33:46 localhost nova_compute[228497]: 2025-11-28 09:33:46.366 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:33:46 localhost nova_compute[228497]: 2025-11-28 09:33:46.376 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:33:46 localhost nova_compute[228497]: 2025-11-28 09:33:46.394 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m 
Nov 28 04:33:46 localhost nova_compute[228497]: 2025-11-28 09:33:46.397 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:33:46 localhost nova_compute[228497]: 2025-11-28 09:33:46.398 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:33:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46511 DF PROTO=TCP SPT=45814 DPT=9100 SEQ=533774416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD331FA0000000001030307) Nov 28 04:33:46 localhost python3.9[234733]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322424.4426816-1258-216671780328919/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:33:47 localhost python3.9[234843]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False Nov 28 04:33:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:33:48 localhost podman[234953]: 2025-11-28 09:33:48.581887272 +0000 UTC m=+0.090202001 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:33:48 localhost podman[234953]: 2025-11-28 09:33:48.621562366 +0000 UTC 
m=+0.129877065 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:33:48 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:33:48 localhost python3.9[234954]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:33:50 localhost systemd[1]: tmp-crun.yTAtw2.mount: Deactivated successfully. Nov 28 04:33:50 localhost podman[235188]: 2025-11-28 09:33:50.435277608 +0000 UTC m=+0.114838365 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, name=rhceph, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True) Nov 28 04:33:50 localhost podman[235188]: 2025-11-28 09:33:50.564574654 +0000 UTC m=+0.244135381 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, version=7, RELEASE=main) Nov 28 04:33:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46512 DF PROTO=TCP SPT=45814 DPT=9100 SEQ=533774416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD341BA0000000001030307) Nov 28 04:33:50 localhost python3[235204]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:33:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:33:50.813 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:33:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:33:50.813 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:33:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:33:50.814 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:33:50 localhost python3[235204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "e6f07353639e492d8c9627d6d615ceeb47cb00ac4d14993b12e8023ee2aeee6f",#012 "Digest": "sha256:ba8d4a4e89620dec751cb5de5631f858557101d862972a8e817b82e4e10180a1",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:ba8d4a4e89620dec751cb5de5631f858557101d862972a8e817b82e4e10180a1"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:26:47.510377458Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": 
"linux",#012 "Size": 505178369,#012 "VirtualSize": 505178369,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2/diff:/var/lib/containers/storage/overlay/f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:c136b33417f134a3b932677bcf7a2df089c29f20eca250129eafd2132d4708bb",#012 "sha256:df29e1f065b3ca62a976bd39a05f70336eee2ae6be8f0f1548e8c749ab2e29f2",#012 "sha256:23884b48504b714fa8c89fa23b204d39c39cc69fece546e604d8bd0566e4fb11"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD 
file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove 
True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 Nov 28 04:33:50 localhost podman[235306]: 2025-11-28 09:33:50.982469382 +0000 UTC m=+0.083563558 container remove d2588e20d5c6110b484bc7ffe641a1caa93791775223f93ea6db4e71c80aa6ff (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '185ba876a5902dbf87b8591344afd39d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676) Nov 28 04:33:50 localhost python3[235204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute Nov 28 04:33:51 localhost podman[235347]: Nov 28 04:33:51 localhost podman[235347]: 2025-11-28 09:33:51.091610632 +0000 UTC m=+0.091164081 container create 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125) Nov 28 04:33:51 localhost podman[235347]: 2025-11-28 09:33:51.04874779 +0000 UTC m=+0.048301359 image pull quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified Nov 28 04:33:51 localhost python3[235204]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label 
config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z 
quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start Nov 28 04:33:51 localhost python3.9[235536]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:33:53 localhost python3.9[235666]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:53 localhost python3.9[235775]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322433.1180668-1450-120375643203824/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:33:54 localhost python3.9[235830]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:33:54 localhost systemd[1]: Reloading. Nov 28 04:33:54 localhost systemd-rc-local-generator[235855]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:33:54 localhost systemd-sysv-generator[235862]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:54 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost python3.9[235923]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:33:55 localhost systemd[1]: Reloading. Nov 28 04:33:55 localhost systemd-sysv-generator[235956]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:33:55 localhost systemd-rc-local-generator[235953]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:33:55 localhost systemd[1]: Starting ceilometer_agent_compute container... Nov 28 04:33:56 localhost systemd[1]: tmp-crun.n2Vm4M.mount: Deactivated successfully. Nov 28 04:33:56 localhost systemd[1]: Started libcrun container. 
Nov 28 04:33:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79357f94827d1342ad406bdeb4e36d95f97d18c5be3690def0d45192dec0b1fd/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Nov 28 04:33:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79357f94827d1342ad406bdeb4e36d95f97d18c5be3690def0d45192dec0b1fd/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Nov 28 04:33:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:33:56 localhost podman[235964]: 2025-11-28 09:33:56.090230083 +0000 UTC m=+0.151960811 container init 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + sudo -E kolla_set_configs Nov 28 04:33:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:33:56 localhost podman[235964]: 2025-11-28 09:33:56.121368686 +0000 UTC m=+0.183099414 container start 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 04:33:56 localhost podman[235964]: ceilometer_agent_compute Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: sudo: unable to send audit message: Operation not permitted Nov 28 04:33:56 localhost systemd[1]: Started ceilometer_agent_compute container. Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Validating config file Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Copying service configuration files Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to 
/etc/ceilometer/polling.yaml
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: INFO:__main__:Writing out command to execute
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: ++ cat /run_command
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + ARGS=
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + sudo kolla_copy_cacerts
Nov 28 04:33:56 localhost podman[235985]: 2025-11-28 09:33:56.218264571 +0000 UTC m=+0.092989306 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: sudo: unable to send audit message: Operation not permitted
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + [[ ! -n '' ]]
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + . kolla_extend_start
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + umask 0022
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 28 04:33:56 localhost podman[235985]: 2025-11-28 09:33:56.25154275 +0000 UTC m=+0.126267465 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 04:33:56 localhost podman[235985]: unhealthy
Nov 28 04:33:56 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:33:56 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'.
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.906 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.907 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.908 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.909 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.910 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.911 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.912 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.913 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.914 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.915 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.916 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.917 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name
= None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.918 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG 
cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.919 2 DEBUG 
cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.920 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.937 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. 
Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.938 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Nov 28 04:33:56 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:56.939 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.015 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.076 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.076 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.076 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.076 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 
09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.077 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 
'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost 
ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.078 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost 
ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost 
ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.079 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.080 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG 
cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] 
compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.081 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost 
ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.082 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 
09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.083 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.084 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.085 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.086 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.087 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.088 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.089 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.090 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.091 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.092 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.093 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.094 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.094 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.094 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.097 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.106 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.109 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.109 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.109 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.110 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.111 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.112 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.113 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:57.113 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:33:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63961 DF PROTO=TCP SPT=35358 DPT=9105 SEQ=1280533386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD35CE30000000001030307)
Nov 28 04:33:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63962 DF PROTO=TCP SPT=35358 DPT=9105 SEQ=1280533386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD360FA0000000001030307)
Nov 28 04:33:58 localhost python3.9[236243]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 04:33:58 localhost systemd[1]: Stopping ceilometer_agent_compute container...
Nov 28 04:33:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49539 DF PROTO=TCP SPT=49276 DPT=9882 SEQ=3920736790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3620C0000000001030307)
Nov 28 04:33:58 localhost systemd[1]: tmp-crun.jvXDop.mount: Deactivated successfully.
Nov 28 04:33:58 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:58.915 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 28 04:33:59 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:59.017 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 28 04:33:59 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:59.017 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 28 04:33:59 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:59.017 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 28 04:33:59 localhost journal[227736]: End of file while reading data: Input/output error
Nov 28 04:33:59 localhost journal[227736]: End of file while reading data: Input/output error
Nov 28 04:33:59 localhost ceilometer_agent_compute[235978]: 2025-11-28 09:33:59.028 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Nov 28 04:33:59 localhost systemd[1]: libpod-783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.scope: Deactivated successfully.
Nov 28 04:33:59 localhost systemd[1]: libpod-783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.scope: Consumed 1.155s CPU time.
Nov 28 04:33:59 localhost podman[236304]: 2025-11-28 09:33:59.172134301 +0000 UTC m=+0.336641272 container died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 04:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:33:59 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.timer: Deactivated successfully. Nov 28 04:33:59 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:33:59 localhost podman[236325]: 2025-11-28 09:33:59.254113441 +0000 UTC m=+0.063067702 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 04:33:59 localhost podman[236326]: 2025-11-28 09:33:59.264690974 +0000 UTC m=+0.069150827 
container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent) Nov 28 04:33:59 localhost podman[236326]: 2025-11-28 09:33:59.296304362 +0000 UTC m=+0.100764225 container exec_died 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:33:59 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:33:59 localhost podman[236325]: 2025-11-28 09:33:59.313396185 +0000 UTC m=+0.122350436 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:33:59 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:33:59 localhost podman[236304]: 2025-11-28 09:33:59.389327828 +0000 UTC m=+0.553834749 container cleanup 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:33:59 localhost podman[236304]: ceilometer_agent_compute Nov 28 04:33:59 localhost podman[236372]: 2025-11-28 09:33:59.492483405 +0000 UTC m=+0.072958504 
container cleanup 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible) Nov 28 04:33:59 localhost podman[236372]: ceilometer_agent_compute Nov 28 04:33:59 localhost systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully. Nov 28 04:33:59 localhost systemd[1]: Stopped ceilometer_agent_compute container. 
Nov 28 04:33:59 localhost systemd[1]: Starting ceilometer_agent_compute container... Nov 28 04:33:59 localhost systemd[1]: Started libcrun container. Nov 28 04:33:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79357f94827d1342ad406bdeb4e36d95f97d18c5be3690def0d45192dec0b1fd/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff) Nov 28 04:33:59 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79357f94827d1342ad406bdeb4e36d95f97d18c5be3690def0d45192dec0b1fd/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff) Nov 28 04:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:33:59 localhost podman[236386]: 2025-11-28 09:33:59.64232584 +0000 UTC m=+0.119852739 container init 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + sudo -E kolla_set_configs Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: sudo: unable to send audit message: Operation not permitted Nov 28 04:33:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:33:59 localhost podman[236386]: 2025-11-28 09:33:59.685108759 +0000 UTC m=+0.162635678 container start 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 04:33:59 localhost podman[236386]: ceilometer_agent_compute Nov 28 04:33:59 localhost systemd[1]: Started ceilometer_agent_compute container. Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Validating config file Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Copying service configuration files Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml Nov 28 04:33:59 localhost 
ceilometer_agent_compute[236400]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: INFO:__main__:Writing out command to execute Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: ++ cat /run_command Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + ARGS= Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + sudo kolla_copy_cacerts Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: sudo: unable to send audit message: Operation not permitted Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + [[ ! 
-n '' ]] Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + . kolla_extend_start Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout' Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\''' Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + umask 0022 Nov 28 04:33:59 localhost ceilometer_agent_compute[236400]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout Nov 28 04:33:59 localhost podman[236409]: 2025-11-28 09:33:59.760056132 +0000 UTC m=+0.080551436 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 04:33:59 localhost podman[236409]: 2025-11-28 09:33:59.79264006 +0000 UTC m=+0.113135354 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:33:59 localhost podman[236409]: unhealthy Nov 28 04:33:59 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:33:59 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:34:00 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. Nov 28 04:34:00 localhost rhsm-service[6576]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server. 
Nov 28 04:34:00 localhost python3.9[236542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.430 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.430 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.430 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.430 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.431 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.432 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = 
%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.433 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue 
[-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.434 2 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.435 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG 
cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
09:34:00.436 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.437 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.438 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.439 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.440 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = 
password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.441 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 
04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.442 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 
04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.443 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.458 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']]. Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.459 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d]. Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.460 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']]. 
Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.471 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.583 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.583 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.583 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.583 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] batch_size = 50 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] config_dir = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] config_file = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.584 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 
'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type = kvm log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.585 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_dir = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_file = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type = none log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.586 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb = 200 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG 
cotyledon.oslo_config_glue [-] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.587 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] sample_source = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.588 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.589 12 DEBUG 
cotyledon.oslo_config_glue [-] monasca.archive_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.590 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 
04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.591 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size = 50 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.592 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 
04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.593 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift = object-store 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.594 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.595 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.596 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] 
service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 
2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.597 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.598 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 
Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.599 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.600 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG 
cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.601 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.605 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.612 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.617 
12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:34:00.618 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:34:00 localhost python3.9[236637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322439.95991-1547-65483499259578/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:34:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=591 DF PROTO=TCP SPT=50378 DPT=9105 SEQ=3544091224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD36CFA0000000001030307) Nov 28 04:34:01 localhost python3.9[236747]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False Nov 28 04:34:02 localhost python3.9[236857]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:34:04 localhost python3[236967]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:34:04 localhost podman[237005]: Nov 28 04:34:04 localhost podman[237005]: 2025-11-28 09:34:04.549909887 +0000 UTC m=+0.062206574 container create 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors ) Nov 28 04:34:04 localhost podman[237005]: 2025-11-28 09:34:04.522859249 +0000 UTC m=+0.035155926 image pull quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c Nov 28 04:34:04 localhost python3[236967]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl Nov 28 04:34:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63964 DF PROTO=TCP SPT=35358 DPT=9105 SEQ=1280533386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD378BA0000000001030307) Nov 28 04:34:06 localhost python3.9[237152]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:34:06 localhost python3.9[237264]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:34:07 localhost python3.9[237373]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322447.0152671-1706-48334662590504/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:34:08 localhost python3.9[237428]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:34:08 localhost systemd[1]: Reloading. Nov 28 04:34:08 localhost systemd-rc-local-generator[237455]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:34:08 localhost systemd-sysv-generator[237459]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:08 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost python3.9[237519]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:34:09 localhost systemd[1]: Reloading. 
Nov 28 04:34:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54587 DF PROTO=TCP SPT=55084 DPT=9102 SEQ=285978181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD38A3A0000000001030307) Nov 28 04:34:09 localhost systemd-rc-local-generator[237547]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:34:09 localhost systemd-sysv-generator[237551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:34:09 localhost systemd[1]: Starting node_exporter container... Nov 28 04:34:09 localhost systemd[1]: Started libcrun container. Nov 28 04:34:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:34:09 localhost podman[237559]: 2025-11-28 09:34:09.659189929 +0000 UTC m=+0.157672064 container init 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." 
Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.675Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=arp Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z 
caller=node_exporter.go:117 level=info collector=bcache Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=bonding Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=btrfs Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=conntrack Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=cpu Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=cpufreq Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=diskstats Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=edac Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=fibrechannel Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=filefd Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=filesystem Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=infiniband Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=ipvs Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=loadavg Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=mdadm Nov 28 04:34:09 localhost node_exporter[237574]: 
ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=meminfo Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=netclass Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=netdev Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=netstat Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=nfs Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=nfsd Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=nvme Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=schedstat Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=sockstat Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=softnet Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.676Z caller=node_exporter.go:117 level=info collector=systemd Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=node_exporter.go:117 level=info collector=tapestats Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=node_exporter.go:117 level=info collector=udp_queues Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=node_exporter.go:117 level=info collector=vmstat Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=node_exporter.go:117 level=info collector=xfs Nov 28 04:34:09 localhost 
node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=node_exporter.go:117 level=info collector=zfs Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100 Nov 28 04:34:09 localhost node_exporter[237574]: ts=2025-11-28T09:34:09.677Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100 Nov 28 04:34:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:34:09 localhost podman[237559]: 2025-11-28 09:34:09.69856168 +0000 UTC m=+0.197043785 container start 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, 
maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:34:09 localhost podman[237559]: node_exporter Nov 28 04:34:09 localhost systemd[1]: Started node_exporter container. Nov 28 04:34:09 localhost podman[237583]: 2025-11-28 09:34:09.770839872 +0000 UTC m=+0.071493888 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:34:09 localhost podman[237583]: 2025-11-28 09:34:09.778689937 +0000 UTC m=+0.079343913 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:34:09 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:34:10 localhost python3.9[237715]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:34:10 localhost systemd[1]: Stopping node_exporter container... Nov 28 04:34:10 localhost systemd[1]: libpod-56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.scope: Deactivated successfully. 
Nov 28 04:34:10 localhost podman[237719]: 2025-11-28 09:34:10.863387211 +0000 UTC m=+0.068592437 container died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:34:10 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.timer: Deactivated successfully. Nov 28 04:34:10 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:34:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686-userdata-shm.mount: Deactivated successfully. 
Nov 28 04:34:10 localhost podman[237719]: 2025-11-28 09:34:10.906724876 +0000 UTC m=+0.111930092 container cleanup 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:34:10 localhost podman[237719]: node_exporter Nov 28 04:34:10 localhost systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 28 04:34:10 localhost podman[237744]: 2025-11-28 09:34:10.996591098 +0000 UTC m=+0.052265946 container cleanup 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:34:10 localhost podman[237744]: node_exporter Nov 28 04:34:11 localhost systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'. Nov 28 04:34:11 localhost systemd[1]: Stopped node_exporter container. Nov 28 04:34:11 localhost systemd[1]: Starting node_exporter container... Nov 28 04:34:11 localhost systemd[1]: Started libcrun container. Nov 28 04:34:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:34:11 localhost podman[237755]: 2025-11-28 09:34:11.173217713 +0000 UTC m=+0.136569493 container init 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.187Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)" Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.187Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)" Nov 28 04:34:11 localhost node_exporter[237770]: 
ts=2025-11-28T09:34:11.187Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required." Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.188Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.188Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice) Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.188Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$ Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.188Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/) Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$ Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:110 level=info msg="Enabled collectors" Nov 28 04:34:11 localhost 
node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=arp
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=bcache
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=bonding
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=cpu
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=edac
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=filefd
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=netclass
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=netdev
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=netstat
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=nfs
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=nvme
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=softnet
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=systemd
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=xfs
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.189Z caller=node_exporter.go:117 level=info collector=zfs
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.190Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 28 04:34:11 localhost node_exporter[237770]: ts=2025-11-28T09:34:11.190Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 28 04:34:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 04:34:11 localhost podman[237755]: 2025-11-28 09:34:11.229277957 +0000 UTC m=+0.192629737 container start 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 04:34:11 localhost podman[237755]: node_exporter
Nov 28 04:34:11 localhost systemd[1]: Started node_exporter container.
Nov 28 04:34:11 localhost podman[237779]: 2025-11-28 09:34:11.291352509 +0000 UTC m=+0.078914100 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 04:34:11 localhost podman[237779]: 2025-11-28 09:34:11.297268585 +0000 UTC m=+0.084830156 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 04:34:11 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 04:34:11 localhost python3.9[237911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 28 04:34:12 localhost python3.9[237999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322451.4644952-1802-17870683863356/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 28 04:34:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63965 DF PROTO=TCP SPT=35358 DPT=9105 SEQ=1280533386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD398FA0000000001030307)
Nov 28 04:34:13 localhost python3.9[238109]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 28 04:34:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59816 DF PROTO=TCP SPT=33896 DPT=9101 SEQ=2320142314 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD39DE10000000001030307)
Nov 28 04:34:14 localhost python3.9[238219]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 28 04:34:15 localhost python3[238329]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 28 04:34:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53415 DF PROTO=TCP SPT=48794 DPT=9100 SEQ=2158328996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3A6FA0000000001030307)
Nov 28 04:34:17 localhost podman[238343]: 2025-11-28 09:34:15.348547865 +0000 UTC m=+0.029961629 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 28 04:34:17 localhost podman[238414]:
Nov 28 04:34:17 localhost podman[238414]: 2025-11-28 09:34:17.373427351 +0000 UTC m=+0.084557397 container create d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 04:34:17 localhost podman[238414]: 2025-11-28 09:34:17.334879475 +0000 UTC m=+0.046009521 image pull quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 28 04:34:17 localhost python3[238329]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Nov 28 04:34:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 04:34:18 localhost systemd[1]: tmp-crun.HyI5Da.mount: Deactivated successfully.
Nov 28 04:34:18 localhost podman[238562]: 2025-11-28 09:34:18.890773449 +0000 UTC m=+0.089972225 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 04:34:18 localhost podman[238562]: 2025-11-28 09:34:18.905735098 +0000 UTC m=+0.104933824 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 04:34:18 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 04:34:18 localhost python3.9[238563]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:34:19 localhost python3.9[238694]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:34:20 localhost python3.9[238803]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322459.8480713-1960-68330817643105/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:34:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53416 DF PROTO=TCP SPT=48794 DPT=9100 SEQ=2158328996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3B6BA0000000001030307)
Nov 28 04:34:21 localhost python3.9[238858]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 04:34:21 localhost systemd[1]: Reloading.
Nov 28 04:34:21 localhost systemd-rc-local-generator[238881]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:34:21 localhost systemd-sysv-generator[238888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:21 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost python3.9[238948]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:34:22 localhost systemd[1]: Reloading.
Nov 28 04:34:22 localhost systemd-rc-local-generator[238972]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:34:22 localhost systemd-sysv-generator[238975]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:34:22 localhost systemd[1]: Starting podman_exporter container...
Nov 28 04:34:22 localhost systemd[1]: Started libcrun container.
Nov 28 04:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:34:22 localhost podman[238988]: 2025-11-28 09:34:22.619052395 +0000 UTC m=+0.176736600 container init d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 04:34:22 localhost podman_exporter[239000]: ts=2025-11-28T09:34:22.640Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 28 04:34:22 localhost podman_exporter[239000]: ts=2025-11-28T09:34:22.640Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 28 04:34:22 localhost podman_exporter[239000]: ts=2025-11-28T09:34:22.640Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 28 04:34:22 localhost podman_exporter[239000]: ts=2025-11-28T09:34:22.640Z caller=handler.go:105 level=info collector=container
Nov 28 04:34:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:34:22 localhost podman[238988]: 2025-11-28 09:34:22.664061563 +0000 UTC m=+0.221745768 container start d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 04:34:22 localhost podman[238988]: podman_exporter
Nov 28 04:34:22 localhost systemd[1]: Starting Podman API Service...
Nov 28 04:34:22 localhost systemd[1]: Started Podman API Service.
Nov 28 04:34:22 localhost systemd[1]: Started podman_exporter container.
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 28 04:34:22 localhost podman[239011]: 2025-11-28 09:34:22.755784083 +0000 UTC m=+0.086074984 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 04:34:22 localhost podman[239011]: 2025-11-28 09:34:22.767577442 +0000 UTC m=+0.097868353 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi )
Nov 28 04:34:22 localhost podman[239011]: unhealthy
Nov 28 04:34:22 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:34:22 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'.
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="Setting parallel job count to 25"
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Nov 28 04:34:22 localhost podman[239012]: @ - - [28/Nov/2025:09:34:22 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 28 04:34:22 localhost podman[239012]: time="2025-11-28T09:34:22Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 04:34:23 localhost python3.9[239159]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 28 04:34:23 localhost systemd[1]: Stopping podman_exporter container...
Nov 28 04:34:23 localhost podman[239012]: @ - - [28/Nov/2025:09:34:22 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 0 "" "Go-http-client/1.1"
Nov 28 04:34:23 localhost systemd[1]: libpod-d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.scope: Deactivated successfully.
Nov 28 04:34:23 localhost podman[239163]: 2025-11-28 09:34:23.642587235 +0000 UTC m=+0.050892453 container died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 04:34:23 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.timer: Deactivated successfully.
Nov 28 04:34:23 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:34:23 localhost systemd[1]: tmp-crun.YKwvLr.mount: Deactivated successfully.
Nov 28 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7-userdata-shm.mount: Deactivated successfully.
Nov 28 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:34:24 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully.
Nov 28 04:34:25 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully.
Nov 28 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully.
Nov 28 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:34:26 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-3c501e2e0af2cd58357bae9f85ce4f24459167783b2ef7b06790cc9f118dfa87-merged.mount: Deactivated successfully.
Nov 28 04:34:27 localhost podman[239163]: 2025-11-28 09:34:27.004985745 +0000 UTC m=+3.413290993 container cleanup d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 04:34:27 localhost podman[239163]: podman_exporter
Nov 28 04:34:27 localhost podman[239175]: 2025-11-28 09:34:27.021195291 +0000 UTC m=+3.359906752 container cleanup d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 04:34:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35887 DF PROTO=TCP SPT=38958 DPT=9105 SEQ=1038688251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3D2130000000001030307)
Nov 28 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully.
Nov 28 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully.
Nov 28 04:34:27 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully.
Nov 28 04:34:27 localhost systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 28 04:34:28 localhost podman[239190]: 2025-11-28 09:34:28.029691801 +0000 UTC m=+0.070845627 container cleanup d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:34:28 localhost podman[239190]: podman_exporter Nov 28 04:34:28 localhost systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'. Nov 28 04:34:28 localhost systemd[1]: Stopped podman_exporter container. Nov 28 04:34:28 localhost systemd[1]: Starting podman_exporter container... Nov 28 04:34:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35888 DF PROTO=TCP SPT=38958 DPT=9105 SEQ=1038688251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3D63A0000000001030307) Nov 28 04:34:28 localhost systemd[1]: Started libcrun container. 
Nov 28 04:34:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:34:28 localhost podman[239204]: 2025-11-28 09:34:28.73861763 +0000 UTC m=+0.299033856 container init d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:34:28 localhost podman_exporter[239219]: ts=2025-11-28T09:34:28.755Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)" Nov 28 04:34:28 localhost podman_exporter[239219]: ts=2025-11-28T09:34:28.755Z caller=exporter.go:69 level=info msg=metrics enhanced=false Nov 28 04:34:28 localhost podman[239012]: @ - - [28/Nov/2025:09:34:28 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1" Nov 28 04:34:28 localhost podman[239012]: time="2025-11-28T09:34:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:34:28 localhost podman_exporter[239219]: ts=2025-11-28T09:34:28.755Z caller=handler.go:94 level=info msg="enabled collectors" Nov 28 04:34:28 localhost 
podman_exporter[239219]: ts=2025-11-28T09:34:28.755Z caller=handler.go:105 level=info collector=container Nov 28 04:34:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:34:28 localhost podman[239204]: 2025-11-28 09:34:28.770202258 +0000 UTC m=+0.330618474 container start d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:34:28 localhost podman[239204]: podman_exporter Nov 28 04:34:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53417 DF PROTO=TCP SPT=48794 DPT=9100 SEQ=2158328996 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3D6FA0000000001030307) Nov 28 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:34:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:34:30 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully. Nov 28 04:34:30 localhost systemd[1]: Started podman_exporter container. Nov 28 04:34:30 localhost podman[239229]: 2025-11-28 09:34:30.54116939 +0000 UTC m=+1.766452903 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:34:30 localhost podman[239229]: 2025-11-28 09:34:30.585452236 +0000 UTC m=+1.810735729 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:34:30 localhost podman[239229]: unhealthy Nov 28 04:34:30 localhost podman[239243]: 2025-11-28 09:34:30.628587915 +0000 UTC m=+0.985809861 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Nov 28 04:34:30 localhost podman[239263]: 2025-11-28 09:34:30.640562259 +0000 UTC m=+0.745282986 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 28 04:34:30 localhost podman[239242]: 2025-11-28 09:34:30.596181322 +0000 UTC m=+0.957832176 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:34:30 localhost 
podman[239243]: 2025-11-28 09:34:30.665574832 +0000 UTC m=+1.022796728 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 04:34:30 localhost podman[239263]: 2025-11-28 09:34:30.672402296 +0000 UTC m=+0.777123053 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 04:34:30 localhost podman[239263]: unhealthy Nov 28 04:34:30 localhost podman[239242]: 2025-11-28 09:34:30.724799746 +0000 UTC m=+1.086450600 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3) Nov 28 04:34:31 localhost python3.9[239419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:34:31 localhost python3.9[239507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322470.7686832-2056-190678931330741/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False 
unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None Nov 28 04:34:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45666 DF PROTO=TCP SPT=50444 DPT=9882 SEQ=1755845825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3E33A0000000001030307) Nov 28 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully. Nov 28 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully. Nov 28 04:34:32 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:32 localhost python3.9[239617]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False Nov 28 04:34:33 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:34:33 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:34:33 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:34:33 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:34:33 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:34:33 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:34:33 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:33 localhost python3.9[239728]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:34:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35890 DF PROTO=TCP SPT=38958 DPT=9105 SEQ=1038688251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3EDFA0000000001030307) Nov 28 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:34:34 localhost python3[239838]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:34:34 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:34:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. 
Nov 28 04:34:36 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:34:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44339 DF PROTO=TCP SPT=42168 DPT=9102 SEQ=4122354115 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD3FF3A0000000001030307) Nov 28 04:34:39 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully. Nov 28 04:34:39 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:40 localhost systemd[1]: var-lib-containers-storage-overlay-3584b97526356ef5b6642175730f52565e80737460bbd74a6e31729b79699070-merged.mount: Deactivated successfully. Nov 28 04:34:41 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:41 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:34:41 localhost podman[239012]: time="2025-11-28T09:34:41Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28/merged: invalid argument" Nov 28 04:34:41 localhost podman[239012]: time="2025-11-28T09:34:41Z" level=error msg="Getting root fs size for \"2a044fcb236ee7f5542f44e64fe793d77e8763efce6900a93b0314c6ec72e94b\": getting diffsize of layer \"e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28\" and its parent \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\": creating overlay mount to /var/lib/containers/storage/overlay/e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/YXI27YR2A6GRINAQ3FVOLZKW4B:/var/lib/containers/storage/overlay/l/5NF4JPSMHF7QZY565YBUJ3HOZG:/var/lib/containers/storage/overlay/l/4XMODH4ZBUQAKWFSLQ6LSJ6RSJ:/var/lib/containers/storage/overlay/l/XRMYXCJ3MORFVFM2M5OYFMELKD,upperdir=/var/lib/containers/storage/overlay/e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28/diff,workdir=/var/lib/containers/storage/overlay/e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28/work,nodev,metacopy=on\": no such file or directory" Nov 28 04:34:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:34:42 localhost podman[239876]: 2025-11-28 09:34:42.155008639 +0000 UTC m=+0.248791584 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:34:42 localhost podman[239876]: 2025-11-28 09:34:42.171535536 +0000 UTC m=+0.265318531 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:34:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35891 DF PROTO=TCP SPT=38958 DPT=9105 SEQ=1038688251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD40EFA0000000001030307) Nov 28 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-25ffb2916f839ff1700adaff5cb29e97302d49ba8ff980d3124f389d659473a3-merged.mount: Deactivated successfully. Nov 28 04:34:43 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. 
Nov 28 04:34:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45668 DF PROTO=TCP SPT=50444 DPT=9882 SEQ=1755845825 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD412FA0000000001030307) Nov 28 04:34:44 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:44 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:34:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:34:46 localhost nova_compute[228497]: 2025-11-28 09:34:46.391 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:46 localhost nova_compute[228497]: 2025-11-28 09:34:46.407 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:46 localhost nova_compute[228497]: 2025-11-28 09:34:46.407 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:46 localhost nova_compute[228497]: 2025-11-28 09:34:46.407 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:46 localhost nova_compute[228497]: 2025-11-28 09:34:46.407 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:46 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:34:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30037 DF PROTO=TCP SPT=48708 DPT=9100 SEQ=849604836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD41C3A0000000001030307) Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.073 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.073 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.086 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.086 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.086 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.086 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.086 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.104 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.104 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.104 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.105 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.105 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:34:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:47 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-c8beb7bd0728a5185ae08edcb0afeede0750b5c1acd8c5a453f776b712778919-merged.mount: Deactivated successfully. 
Nov 28 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.560 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:34:47 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.754 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.756 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13208MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.756 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.757 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.823 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:34:47 localhost nova_compute[228497]: 2025-11-28 09:34:47.823 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.065 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.533 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.539 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.560 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.562 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:34:48 localhost nova_compute[228497]: 2025-11-28 09:34:48.562 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:34:48 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:34:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:34:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:49 localhost systemd[1]: tmp-crun.RYxLPG.mount: Deactivated successfully. 
Nov 28 04:34:49 localhost podman[239955]: 2025-11-28 09:34:49.503339934 +0000 UTC m=+0.115423451 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 04:34:49 localhost podman[239955]: 2025-11-28 09:34:49.539166035 +0000 UTC m=+0.151249542 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Nov 28 04:34:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30038 DF PROTO=TCP SPT=48708 DPT=9100 SEQ=849604836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD42BFA0000000001030307) Nov 28 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:34:50.814 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:34:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:34:50.814 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:34:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:34:50.814 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:34:50 localhost systemd[1]: var-lib-containers-storage-overlay-3584b97526356ef5b6642175730f52565e80737460bbd74a6e31729b79699070-merged.mount: Deactivated successfully. Nov 28 04:34:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:51 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:34:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:53 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:34:53 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:53 localhost podman[239851]: 2025-11-28 09:34:35.234439484 +0000 UTC m=+0.052966588 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Nov 28 04:34:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:34:55 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:34:55 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:56 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:57 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:34:57 localhost podman[239012]: time="2025-11-28T09:34:57Z" level=error msg="Getting root fs size for \"2e97d8a51625064e6f06a470ec8e1c443497ab99753302611140ab63dcf05711\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy" Nov 28 04:34:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24827 DF PROTO=TCP SPT=47076 DPT=9105 SEQ=3152718205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD447430000000001030307) Nov 28 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:34:57 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:57 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:34:57 localhost podman[240089]: 2025-11-28 09:34:57.122951766 +0000 UTC m=+0.038284749 image pull quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7 Nov 28 04:34:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24828 DF PROTO=TCP SPT=47076 DPT=9105 SEQ=3152718205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD44B3A0000000001030307) Nov 28 04:34:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50502 DF PROTO=TCP SPT=57680 DPT=9882 SEQ=3572996585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD44C6C0000000001030307) Nov 28 04:34:59 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037-merged.mount: Deactivated successfully. 
Nov 28 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037-merged.mount: Deactivated successfully. Nov 28 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:35:00 localhost systemd[1]: var-lib-containers-storage-overlay-25ffb2916f839ff1700adaff5cb29e97302d49ba8ff980d3124f389d659473a3-merged.mount: Deactivated successfully. Nov 28 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63967 DF PROTO=TCP SPT=35358 DPT=9105 SEQ=1280533386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD456FA0000000001030307) Nov 28 04:35:01 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:02 localhost systemd[1]: var-lib-containers-storage-overlay-77782651c01fa3d8af8a79c02d3312e7fed09a9087964da1a7c959a65a9214b8-merged.mount: Deactivated successfully. Nov 28 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:35:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:35:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:03 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:35:03 localhost podman[240104]: 2025-11-28 09:35:03.370984519 +0000 UTC m=+0.187758385 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 28 04:35:03 localhost podman[240104]: 2025-11-28 09:35:03.382514209 +0000 UTC m=+0.199288005 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 28 04:35:03 localhost podman[240102]: 2025-11-28 09:35:03.354546744 +0000 UTC m=+0.172265439 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 04:35:03 localhost podman[240102]: 2025-11-28 09:35:03.440494173 +0000 UTC m=+0.258212848 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 04:35:03 localhost podman[240102]: unhealthy
Nov 28 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:35:03 localhost systemd[1]: var-lib-containers-storage-overlay-c8beb7bd0728a5185ae08edcb0afeede0750b5c1acd8c5a453f776b712778919-merged.mount: Deactivated successfully.
Nov 28 04:35:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24830 DF PROTO=TCP SPT=47076 DPT=9105 SEQ=3152718205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD462FA0000000001030307)
Nov 28 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 28 04:35:05 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 28 04:35:05 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:35:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:05 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:05 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:35:05 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'.
Nov 28 04:35:05 localhost podman[240103]: 2025-11-28 09:35:05.998959002 +0000 UTC m=+2.815830081 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 04:35:06 localhost podman[240103]: 2025-11-28 09:35:06.037829228 +0000 UTC m=+2.854700277 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 28 04:35:06 localhost podman[240105]: 2025-11-28 09:35:06.055095718 +0000 UTC m=+2.866883928 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 04:35:06 localhost podman[240105]: 2025-11-28 09:35:06.067510607 +0000 UTC m=+2.879298837 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 04:35:06 localhost podman[240105]: unhealthy
Nov 28 04:35:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:35:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:35:07 localhost podman[240089]:
Nov 28 04:35:07 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:35:07 localhost podman[240089]: 2025-11-28 09:35:07.846289295 +0000 UTC m=+10.761622238 container create 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 28 04:35:07 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE
Nov 28 04:35:07 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'.
Nov 28 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 28 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-92a670f87546e9222dc3530777cbcbb6bd2a424665ad22aef150e174bea9c765-merged.mount: Deactivated successfully.
Nov 28 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:35:08 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:35:08 localhost python3[239838]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Nov 28 04:35:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60077 DF PROTO=TCP SPT=41392 DPT=9102 SEQ=2698449811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4747A0000000001030307)
Nov 28 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 28 04:35:09 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 28 04:35:09 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:10 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:35:10 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:35:11 localhost python3.9[240313]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 28 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Nov 28 04:35:11 localhost systemd[1]: var-lib-containers-storage-overlay-78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38-merged.mount: Deactivated successfully.
Nov 28 04:35:12 localhost python3.9[240425]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:35:12 localhost python3.9[240534]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322512.1301556-2215-188458169425784/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:35:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24831 DF PROTO=TCP SPT=47076 DPT=9105 SEQ=3152718205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD482FA0000000001030307)
Nov 28 04:35:12 localhost systemd[1]: var-lib-containers-storage-overlay-58fd8127cd9da7f4875f0be3b8ee189ddf406ac3663dca02ef65d88f989bc037-merged.mount: Deactivated successfully.
Nov 28 04:35:13 localhost python3.9[240589]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 28 04:35:13 localhost systemd[1]: Reloading.
Nov 28 04:35:13 localhost systemd-rc-local-generator[240611]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:35:13 localhost systemd-sysv-generator[240618]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 28 04:35:13 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Nov 28 04:35:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:35:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42877 DF PROTO=TCP SPT=39300 DPT=9101 SEQ=2931178254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD488410000000001030307)
Nov 28 04:35:14 localhost python3.9[240679]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 28 04:35:14 localhost systemd[1]: Reloading.
Nov 28 04:35:14 localhost systemd-sysv-generator[240707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:35:14 localhost systemd-rc-local-generator[240704]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Nov 28 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:35:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 04:35:14 localhost systemd[1]: Starting openstack_network_exporter container...
Nov 28 04:35:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:35:14 localhost podman[240719]: 2025-11-28 09:35:14.800477957 +0000 UTC m=+0.143516234 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Nov 28 04:35:14 localhost podman[240719]: 2025-11-28 09:35:14.839513357 +0000 UTC m=+0.182551654 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Nov 28 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:35:15 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:35:16 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:35:16 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 04:35:16 localhost systemd[1]: Started libcrun container.
Nov 28 04:35:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be0dafad38c026533af5bb8627b1fd421c08a935c7735a8b660de0bf6d8ee84/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 28 04:35:16 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be0dafad38c026533af5bb8627b1fd421c08a935c7735a8b660de0bf6d8ee84/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Nov 28 04:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:35:16 localhost podman[240721]: 2025-11-28 09:35:16.257698362 +0000 UTC m=+1.594410230 container init 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7) Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *bridge.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *coverage.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *datapath.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *iface.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *memory.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *ovnnorthd.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *ovn.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *ovsdbserver.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *pmd_perf.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *pmd_rxq.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: INFO 09:35:16 main.go:48: registering *vswitch.Collector Nov 28 04:35:16 localhost openstack_network_exporter[240755]: NOTICE 09:35:16 main.go:82: listening on http://:9105/metrics Nov 28 04:35:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:35:16 localhost podman[240721]: 2025-11-28 09:35:16.297112645 +0000 UTC m=+1.633824533 container start 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6) Nov 28 04:35:16 localhost podman[240721]: openstack_network_exporter Nov 28 04:35:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63619 DF PROTO=TCP SPT=42538 DPT=9100 SEQ=3505838984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4917A0000000001030307) Nov 28 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:17 localhost systemd[1]: Started openstack_network_exporter container. 
Nov 28 04:35:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:17 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:17 localhost podman[240765]: 2025-11-28 09:35:17.343574195 +0000 UTC m=+1.043112636 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, 
url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, architecture=x86_64, release=1755695350, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:35:17 localhost podman[240765]: 2025-11-28 09:35:17.367006084 +0000 UTC m=+1.066544515 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 28 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:35:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:18 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:18 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-77782651c01fa3d8af8a79c02d3312e7fed09a9087964da1a7c959a65a9214b8-merged.mount: Deactivated successfully. Nov 28 04:35:18 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63620 DF PROTO=TCP SPT=42538 DPT=9100 SEQ=3505838984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4A13A0000000001030307) Nov 28 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-92a670f87546e9222dc3530777cbcbb6bd2a424665ad22aef150e174bea9c765-merged.mount: Deactivated successfully. Nov 28 04:35:20 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:35:21 localhost podman[240898]: 2025-11-28 09:35:21.604254136 +0000 UTC m=+0.098752894 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 04:35:21 localhost podman[240898]: 2025-11-28 09:35:21.622695437 +0000 UTC m=+0.117194175 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3) Nov 28 04:35:21 localhost python3.9[240897]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:35:21 localhost systemd[1]: 
var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 28 04:35:21 localhost systemd[1]: var-lib-containers-storage-overlay-95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0-merged.mount: Deactivated successfully. Nov 28 04:35:21 localhost systemd[1]: Stopping openstack_network_exporter container... Nov 28 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 28 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:22 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:22 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:35:22 localhost systemd[1]: libpod-6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.scope: Deactivated successfully. 
Nov 28 04:35:22 localhost podman[240917]: 2025-11-28 09:35:22.382131571 +0000 UTC m=+0.527957489 container stop 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:35:22 localhost podman[240917]: 2025-11-28 09:35:22.412780997 +0000 UTC m=+0.558606935 container died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:35:22 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.timer: Deactivated successfully. Nov 28 04:35:22 localhost systemd[1]: Stopped /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf-userdata-shm.mount: Deactivated successfully. Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. 
Nov 28 04:35:23 localhost podman[240917]: 2025-11-28 09:35:23.320506365 +0000 UTC m=+1.466332263 container cleanup 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container) Nov 28 04:35:23 localhost podman[240917]: openstack_network_exporter Nov 28 04:35:23 localhost podman[240932]: 2025-11-28 09:35:23.341333582 +0000 UTC m=+0.946285224 container cleanup 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-5be0dafad38c026533af5bb8627b1fd421c08a935c7735a8b660de0bf6d8ee84-merged.mount: Deactivated successfully. Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. 
Nov 28 04:35:23 localhost systemd[1]: var-lib-containers-storage-overlay-78fa405dc0dc1392ab1503db8559712d0c057956ae06af0d32d5b9d343fe4a38-merged.mount: Deactivated successfully. Nov 28 04:35:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:24 localhost systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Nov 28 04:35:24 localhost podman[240946]: 2025-11-28 09:35:24.1072361 +0000 UTC m=+0.052923169 container cleanup 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, version=9.6, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 04:35:24 localhost podman[240946]: openstack_network_exporter Nov 28 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:24 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:26 localhost systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'. Nov 28 04:35:26 localhost systemd[1]: Stopped openstack_network_exporter container. Nov 28 04:35:26 localhost systemd[1]: Starting openstack_network_exporter container... Nov 28 04:35:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49613 DF PROTO=TCP SPT=54566 DPT=9105 SEQ=3331576412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4BC730000000001030307) Nov 28 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:35:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49614 DF PROTO=TCP SPT=54566 DPT=9105 SEQ=3331576412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4C07A0000000001030307) Nov 28 04:35:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:35:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63621 DF PROTO=TCP SPT=42538 DPT=9100 SEQ=3505838984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4C0FA0000000001030307) Nov 28 04:35:28 localhost systemd[1]: Started libcrun container. Nov 28 04:35:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be0dafad38c026533af5bb8627b1fd421c08a935c7735a8b660de0bf6d8ee84/merged/run/ovn supports timestamps until 2038 (0x7fffffff) Nov 28 04:35:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5be0dafad38c026533af5bb8627b1fd421c08a935c7735a8b660de0bf6d8ee84/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff) Nov 28 04:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:35:28 localhost podman[240959]: 2025-11-28 09:35:28.872802883 +0000 UTC m=+2.150699512 container init 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter) Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *bridge.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *coverage.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *datapath.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *iface.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *memory.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *ovnnorthd.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *ovn.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *ovsdbserver.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *pmd_perf.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *pmd_rxq.Collector Nov 28 04:35:28 localhost openstack_network_exporter[240973]: INFO 09:35:28 main.go:48: registering *vswitch.Collector Nov 28 
04:35:28 localhost openstack_network_exporter[240973]: NOTICE 09:35:28 main.go:82: listening on http://:9105/metrics Nov 28 04:35:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:35:28 localhost podman[240959]: 2025-11-28 09:35:28.924113171 +0000 UTC m=+2.202009770 container start 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=edpm) Nov 28 04:35:28 localhost podman[240959]: openstack_network_exporter Nov 28 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:29 localhost systemd[1]: var-lib-containers-storage-overlay-59a9aab44f9035abb8c665d33ff9cced94beb2960f79a3a6871873d8e649ac58-merged.mount: Deactivated successfully. Nov 28 04:35:29 localhost systemd[1]: Started openstack_network_exporter container. 
Nov 28 04:35:30 localhost podman[240983]: 2025-11-28 09:35:30.000432232 +0000 UTC m=+1.082058113 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible) Nov 28 04:35:30 localhost podman[240983]: 2025-11-28 09:35:30.009131936 +0000 UTC m=+1.090757797 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm) Nov 28 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:30 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:35:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35893 DF PROTO=TCP SPT=38958 DPT=9105 SEQ=1038688251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4CCFA0000000001030307) Nov 28 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:35:31 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:35:32 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. 
Nov 28 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-95337b0ee1bc2060bec425b9be63b35b01d68f1de2bac6065e353d72be5388e0-merged.mount: Deactivated successfully. Nov 28 04:35:33 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:35:33 localhost python3.9[241111]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 28 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:34 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:35:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49616 DF PROTO=TCP SPT=54566 DPT=9105 SEQ=3331576412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4D83A0000000001030307) Nov 28 04:35:34 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:35 localhost systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully. Nov 28 04:35:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:35:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:36 localhost systemd[1]: var-lib-containers-storage-overlay-93bf0314bbd4063198be021c760bb47b8172c6cfa3163da2b90a6f202605824f-merged.mount: Deactivated successfully. Nov 28 04:35:36 localhost systemd[1]: tmp-crun.C6sC7G.mount: Deactivated successfully. Nov 28 04:35:36 localhost podman[241130]: 2025-11-28 09:35:36.503806843 +0000 UTC m=+0.102317935 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Nov 28 04:35:36 localhost podman[241130]: 2025-11-28 09:35:36.537544077 +0000 UTC m=+0.136055139 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 28 04:35:36 localhost podman[241130]: unhealthy Nov 28 04:35:36 localhost podman[241131]: 2025-11-28 09:35:36.553262672 +0000 UTC m=+0.150806904 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:35:36 localhost podman[241131]: 2025-11-28 09:35:36.583208986 +0000 UTC m=+0.180753208 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 04:35:37 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:35:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:38 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:38 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:35:38 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:35:38 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:35:38 localhost podman[241163]: 2025-11-28 09:35:38.767167336 +0000 UTC m=+0.864998352 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 04:35:38 localhost podman[241164]: 2025-11-28 09:35:38.843185912 +0000 UTC m=+0.939135699 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:35:38 localhost podman[241163]: 2025-11-28 09:35:38.850315757 +0000 UTC m=+0.948146733 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.vendor=CentOS, config_id=ovn_controller) Nov 28 04:35:38 localhost podman[241164]: 2025-11-28 09:35:38.906929081 +0000 UTC m=+1.002878918 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:35:38 localhost podman[241164]: unhealthy Nov 28 04:35:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37191 DF PROTO=TCP SPT=49690 DPT=9102 SEQ=2840877608 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4E9BD0000000001030307) Nov 28 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully. Nov 28 04:35:39 localhost systemd[1]: var-lib-containers-storage-overlay-59a9aab44f9035abb8c665d33ff9cced94beb2960f79a3a6871873d8e649ac58-merged.mount: Deactivated successfully. 
Nov 28 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:40 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:35:41 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:35:41 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:35:41 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:35:41 localhost systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully. Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:35:42 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:35:43 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49617 DF PROTO=TCP SPT=54566 DPT=9105 SEQ=3331576412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4F8FA0000000001030307) Nov 28 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:43 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:35:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12054 DF PROTO=TCP SPT=39340 DPT=9882 SEQ=4066141312 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD4FCFA0000000001030307) Nov 28 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 28 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:44 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. Nov 28 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:35:46 localhost podman[241208]: 2025-11-28 09:35:46.499887373 +0000 UTC m=+0.104763943 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:35:46 localhost podman[241208]: 2025-11-28 09:35:46.529990041 +0000 UTC m=+0.134866591 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:35:46 localhost nova_compute[228497]: 2025-11-28 09:35:46.550 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53433 DF PROTO=TCP SPT=40716 DPT=9100 SEQ=3842616379 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD506BA0000000001030307) Nov 28 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully. 
Nov 28 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-93bf0314bbd4063198be021c760bb47b8172c6cfa3163da2b90a6f202605824f-merged.mount: Deactivated successfully. Nov 28 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:46 localhost systemd[1]: var-lib-containers-storage-overlay-c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a-merged.mount: Deactivated successfully. Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.075 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.097 228501 DEBUG oslo_concurrency.lockutils [None 
req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.097 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.097 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.098 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.098 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:35:47 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. 
Nov 28 04:35:47 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. Nov 28 04:35:47 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. Nov 28 04:35:47 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.567 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.755 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.756 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13211MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.756 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.756 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.836 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.836 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:35:47 localhost nova_compute[228497]: 2025-11-28 09:35:47.861 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:35:48 localhost nova_compute[228497]: 2025-11-28 09:35:48.331 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:35:48 localhost nova_compute[228497]: 2025-11-28 09:35:48.338 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:35:48 localhost nova_compute[228497]: 2025-11-28 09:35:48.363 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:35:48 localhost nova_compute[228497]: 2025-11-28 09:35:48.366 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:35:48 localhost nova_compute[228497]: 2025-11-28 09:35:48.366 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. Nov 28 04:35:48 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. Nov 28 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. 
Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.361 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.362 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.362 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.362 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.375 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.375 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.375 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:35:49 localhost nova_compute[228497]: 2025-11-28 09:35:49.376 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Nov 28 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:35:49 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:35:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53434 DF PROTO=TCP SPT=40716 DPT=9100 SEQ=3842616379 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5167A0000000001030307) Nov 28 04:35:50 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:35:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:35:50.815 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:35:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:35:50.815 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:35:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:35:50.816 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:35:51 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. 
Nov 28 04:35:51 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:35:52 localhost systemd[1]: tmp-crun.Z9mRrW.mount: Deactivated successfully. Nov 28 04:35:52 localhost podman[241274]: 2025-11-28 09:35:52.978980639 +0000 UTC m=+0.087138307 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 04:35:52 localhost podman[241274]: 2025-11-28 09:35:52.989967015 +0000 UTC m=+0.098124653 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125) Nov 28 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:54 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:35:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:54 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 28 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:55 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:56 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:35:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:56 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:35:57 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:35:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24064 DF PROTO=TCP SPT=33840 DPT=9105 SEQ=2313935200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD531A20000000001030307) Nov 28 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:35:58 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:35:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24065 DF PROTO=TCP SPT=33840 DPT=9105 SEQ=2313935200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD535BA0000000001030307) Nov 28 04:35:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29863 DF PROTO=TCP SPT=36170 DPT=9882 SEQ=314207492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD536CC0000000001030307) Nov 28 04:35:59 localhost systemd[1]: var-lib-containers-storage-overlay-c649278c2e5a474424d7d5698a840ae7cdf6b8243f9150d8a362719bce70699a-merged.mount: Deactivated successfully. 
Nov 28 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. Nov 28 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. Nov 28 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.613 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.613 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.613 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.613 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.614 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:36:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:36:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:36:00 localhost systemd[1]: var-lib-containers-storage-overlay-b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2-merged.mount: Deactivated successfully. 
Nov 28 04:36:00 localhost podman[241377]: 2025-11-28 09:36:00.9393327 +0000 UTC m=+0.086599701 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.) Nov 28 04:36:00 localhost podman[241377]: 2025-11-28 09:36:00.953453495 +0000 UTC m=+0.100720486 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64) Nov 28 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. 
Nov 28 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. Nov 28 04:36:01 localhost systemd[1]: var-lib-containers-storage-overlay-dc5b8b4def912dce4d14a76402b323c6b5c48ee8271230eacbdaaa7e58e676b2-merged.mount: Deactivated successfully. Nov 28 04:36:01 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:36:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24833 DF PROTO=TCP SPT=47076 DPT=9105 SEQ=3152718205 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD540FA0000000001030307) Nov 28 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:36:02 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Nov 28 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:36:03 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-5ea32d7a444086a7f1ea2479bd7b214a5adab9651f7d4df1f24a039ae5563f9d-merged.mount: Deactivated successfully. 
Nov 28 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:04 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24067 DF PROTO=TCP SPT=33840 DPT=9105 SEQ=2313935200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD54D7A0000000001030307) Nov 28 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:05 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:06 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:36:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:07 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:36:08 localhost systemd[1]: var-lib-containers-storage-overlay-b2363e42c8cc93f560c242c278a1b76f810df60301763e880790aefc5b17b52f-merged.mount: Deactivated successfully. Nov 28 04:36:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:36:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:36:08 localhost podman[241397]: 2025-11-28 09:36:08.980144805 +0000 UTC m=+0.093393815 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:36:09 localhost podman[241396]: 2025-11-28 09:36:09.046493147 +0000 UTC 
m=+0.163025419 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true) Nov 28 04:36:09 localhost podman[241397]: 2025-11-28 09:36:09.064110962 +0000 UTC m=+0.177360011 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:36:09 localhost podman[241396]: 2025-11-28 09:36:09.076226424 +0000 UTC m=+0.192758716 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, 
name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 04:36:09 localhost podman[241396]: unhealthy Nov 28 04:36:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57651 DF PROTO=TCP SPT=35162 DPT=9102 SEQ=3386133729 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD55EFA0000000001030307) Nov 28 04:36:09 localhost systemd[1]: 
var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:09 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:10 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:36:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:36:11 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:36:11 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:36:11 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:36:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:36:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:11 localhost podman[241431]: 2025-11-28 09:36:11.276051254 +0000 UTC m=+0.082514192 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:36:11 localhost podman[241431]: 2025-11-28 09:36:11.334491696 +0000 UTC m=+0.140954684 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Nov 28 04:36:11 localhost podman[241432]: 2025-11-28 09:36:11.339283026 +0000 UTC m=+0.143299877 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:36:11 localhost podman[241432]: 2025-11-28 09:36:11.423605624 +0000 UTC m=+0.227622465 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:36:11 localhost podman[241432]: unhealthy Nov 28 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 28 04:36:12 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24068 DF PROTO=TCP SPT=33840 DPT=9105 SEQ=2313935200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD56CFA0000000001030307) Nov 28 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:13 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:13 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:36:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:13 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:13 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:36:13 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:36:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9554 DF PROTO=TCP SPT=44628 DPT=9101 SEQ=769887207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD572A00000000001030307) Nov 28 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:14 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:15 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully. Nov 28 04:36:16 localhost systemd[1]: var-lib-containers-storage-overlay-b290307da7690cf991f1186b07b34a264d1d07b861913129e99370229181e3a2-merged.mount: Deactivated successfully. 
Nov 28 04:36:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55048 DF PROTO=TCP SPT=37938 DPT=9100 SEQ=3728342459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD57BFB0000000001030307) Nov 28 04:36:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:36:17 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:17 localhost systemd[1]: tmp-crun.VnhZuq.mount: Deactivated successfully. Nov 28 04:36:18 localhost podman[241476]: 2025-11-28 09:36:18.0057864 +0000 UTC m=+0.111287178 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:36:18 localhost podman[241476]: 2025-11-28 09:36:18.041581989 +0000 UTC m=+0.147082807 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 
04:36:18 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:18 localhost systemd[1]: var-lib-containers-storage-overlay-0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8-merged.mount: Deactivated successfully. Nov 28 04:36:19 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:19 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 28 04:36:20 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 28 04:36:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55049 DF PROTO=TCP SPT=37938 DPT=9100 SEQ=3728342459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD58BBB0000000001030307) Nov 28 04:36:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:36:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:21 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:22 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 28 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:23 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:36:24 localhost systemd[1]: var-lib-containers-storage-overlay-b2363e42c8cc93f560c242c278a1b76f810df60301763e880790aefc5b17b52f-merged.mount: Deactivated successfully. Nov 28 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:24 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:36:24 localhost podman[241499]: 2025-11-28 09:36:24.556675659 +0000 UTC m=+0.076974668 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:36:24 localhost podman[241499]: 2025-11-28 09:36:24.567967154 +0000 UTC m=+0.088266233 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd) Nov 28 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:36:25 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:26 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:26 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:36:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10883 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2277982890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5A6D30000000001030307) Nov 28 04:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:36:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10884 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2277982890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5AAFA0000000001030307) Nov 28 04:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 28 04:36:28 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:28 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62685 DF PROTO=TCP SPT=43380 DPT=9882 SEQ=503015964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5ABFC0000000001030307) Nov 28 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-7c29bfa5a0679179b90046634e87037ab6ff6f22b5fa7106d9841b0f8caae33b-merged.mount: Deactivated successfully. Nov 28 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:29 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. 
Nov 28 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:30 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49619 DF PROTO=TCP SPT=54566 DPT=9105 SEQ=3331576412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5B6FB0000000001030307) Nov 28 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 28 04:36:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21-merged.mount: Deactivated successfully. Nov 28 04:36:31 localhost systemd[1]: var-lib-containers-storage-overlay-2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21-merged.mount: Deactivated successfully. 
Nov 28 04:36:31 localhost podman[241516]: 2025-11-28 09:36:31.880156853 +0000 UTC m=+0.087859689 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:36:31 localhost podman[241516]: 2025-11-28 09:36:31.91011141 +0000 UTC m=+0.117814226 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter) Nov 28 04:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-0f50e6e193badeb95447e2c9ef73121ac91dbd5780ab99ca29933bd60e5eb8a8-merged.mount: Deactivated successfully. Nov 28 04:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:36:33 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:34 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:36:34 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10886 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2277982890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5C2BA0000000001030307) Nov 28 04:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:35 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 28 04:36:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:36 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:36:36 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 28 04:36:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully. Nov 28 04:36:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2810 DF PROTO=TCP SPT=59150 DPT=9102 SEQ=2295612562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5D3FA0000000001030307) Nov 28 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:39 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:40 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-0818f18800ef844a6a48c8a7ece9ae523d5aa6f809095a5eb180d408e8d636c4-merged.mount: Deactivated successfully. 
Nov 28 04:36:41 localhost systemd[1]: var-lib-containers-storage-overlay-0818f18800ef844a6a48c8a7ece9ae523d5aa6f809095a5eb180d408e8d636c4-merged.mount: Deactivated successfully. Nov 28 04:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:36:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:36:41 localhost podman[241533]: 2025-11-28 09:36:41.316855912 +0000 UTC m=+0.075258962 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:36:41 localhost podman[241533]: 2025-11-28 09:36:41.405566668 +0000 UTC m=+0.163969668 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:36:41 localhost podman[241533]: unhealthy Nov 28 04:36:41 localhost podman[241534]: 2025-11-28 09:36:41.40747724 +0000 UTC m=+0.162992267 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:36:41 localhost podman[241534]: 2025-11-28 09:36:41.487259007 +0000 UTC m=+0.242774094 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully. Nov 28 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-7c29bfa5a0679179b90046634e87037ab6ff6f22b5fa7106d9841b0f8caae33b-merged.mount: Deactivated successfully. Nov 28 04:36:42 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 28 04:36:42 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:36:42 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:36:42 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:36:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10887 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2277982890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5E2FA0000000001030307) Nov 28 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:36:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:36:43 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 28 04:36:43 localhost podman[241568]: 2025-11-28 09:36:43.886591054 +0000 UTC m=+0.087872870 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:36:43 localhost podman[241568]: 2025-11-28 09:36:43.918241196 +0000 UTC m=+0.119523032 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 
'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:36:43 localhost podman[241568]: unhealthy Nov 28 04:36:43 localhost podman[241567]: 2025-11-28 09:36:43.931871207 +0000 UTC m=+0.135028603 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 04:36:43 
localhost podman[241567]: 2025-11-28 09:36:43.998610033 +0000 UTC m=+0.201767399 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:36:44 localhost systemd[1]: tmp-crun.0psnv7.mount: Deactivated successfully. Nov 28 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 28 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. 
Nov 28 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8821 DF PROTO=TCP SPT=48714 DPT=9101 SEQ=4107005301 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5E7D10000000001030307) Nov 28 04:36:44 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:44 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:44 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:36:44 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:36:44 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:36:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:36:45 localhost nova_compute[228497]: 2025-11-28 09:36:45.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:45 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 28 04:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully. Nov 28 04:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21-merged.mount: Deactivated successfully. Nov 28 04:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:46 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-2465f602934c9b49eec4e0598b6266084474df0f2da0f1de92a72390c7a9be21-merged.mount: Deactivated successfully. Nov 28 04:36:46 localhost nova_compute[228497]: 2025-11-28 09:36:46.069 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:46 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:36:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59321 DF PROTO=TCP SPT=47654 DPT=9100 SEQ=546634030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD5F0FA0000000001030307) Nov 28 04:36:47 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:47 localhost nova_compute[228497]: 2025-11-28 09:36:47.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:48 localhost nova_compute[228497]: 2025-11-28 09:36:48.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:48 localhost nova_compute[228497]: 2025-11-28 09:36:48.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:36:48 localhost nova_compute[228497]: 2025-11-28 09:36:48.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:36:48 localhost nova_compute[228497]: 2025-11-28 09:36:48.092 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:36:48 localhost nova_compute[228497]: 2025-11-28 09:36:48.093 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 28 04:36:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:36:48 localhost systemd[1]: var-lib-containers-storage-overlay-af4441ca58e5a3dae70e850402577fe72fc0370c205d9690db9c04c01d30a59b-merged.mount: Deactivated successfully. Nov 28 04:36:48 localhost podman[241613]: 2025-11-28 09:36:48.487370776 +0000 UTC m=+0.077906068 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', 
'--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:36:48 localhost podman[241613]: 2025-11-28 09:36:48.527023506 +0000 UTC m=+0.117558788 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:36:48 localhost 
systemd[1]: var-lib-containers-storage-overlay-af4441ca58e5a3dae70e850402577fe72fc0370c205d9690db9c04c01d30a59b-merged.mount: Deactivated successfully. Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.075 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.075 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.092 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.093 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.093 228501 DEBUG oslo_concurrency.lockutils [None 
req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.093 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.094 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.555 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.732 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.734 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13117MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.734 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.734 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.819 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.821 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:36:49 localhost nova_compute[228497]: 2025-11-28 09:36:49.849 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:36:50 localhost nova_compute[228497]: 2025-11-28 09:36:50.339 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:36:50 localhost nova_compute[228497]: 2025-11-28 09:36:50.346 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:36:50 localhost nova_compute[228497]: 2025-11-28 09:36:50.369 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:36:50 localhost nova_compute[228497]: 2025-11-28 09:36:50.373 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:36:50 localhost nova_compute[228497]: 2025-11-28 09:36:50.374 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Nov 28 04:36:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59322 DF PROTO=TCP SPT=47654 DPT=9100 SEQ=546634030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD600BB0000000001030307) Nov 28 04:36:50 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:50 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:50 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:50 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:36:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:36:50.816 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:36:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:36:50.817 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:36:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:36:50.817 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 
04:36:51 localhost nova_compute[228497]: 2025-11-28 09:36:51.371 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:51 localhost nova_compute[228497]: 2025-11-28 09:36:51.371 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:36:51 localhost nova_compute[228497]: 2025-11-28 09:36:51.372 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:52 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:52 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:36:52 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. 
Nov 28 04:36:52 localhost podman[239012]: time="2025-11-28T09:36:52Z" level=error msg="Getting root fs size for \"929c9b5315b4fc33e01978423e59cb5b383ecd56e91c5f891b7c011283bec432\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": unmounting layer f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958: replacing mount point \"/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged\": device or resource busy" Nov 28 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:53 localhost systemd[1]: session-55.scope: Deactivated successfully. Nov 28 04:36:53 localhost systemd[1]: session-55.scope: Consumed 58.594s CPU time. Nov 28 04:36:53 localhost systemd-logind[763]: Session 55 logged out. Waiting for processes to exit. Nov 28 04:36:53 localhost systemd-logind[763]: Removed session 55. Nov 28 04:36:53 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:54 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073-merged.mount: Deactivated successfully. Nov 28 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-d63efe17da859108a09d9b90626ba0c433787abe209cd4ac755f6ba2a5206671-merged.mount: Deactivated successfully. Nov 28 04:36:55 localhost systemd[1]: var-lib-containers-storage-overlay-d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784-merged.mount: Deactivated successfully. Nov 28 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-d63efe17da859108a09d9b90626ba0c433787abe209cd4ac755f6ba2a5206671-merged.mount: Deactivated successfully. Nov 28 04:36:56 localhost systemd[1]: var-lib-containers-storage-overlay-d63efe17da859108a09d9b90626ba0c433787abe209cd4ac755f6ba2a5206671-merged.mount: Deactivated successfully. Nov 28 04:36:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:36:56 localhost podman[241679]: 2025-11-28 09:36:56.995267311 +0000 UTC m=+0.101737357 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:36:57 localhost podman[241679]: 2025-11-28 09:36:57.004696255 +0000 UTC m=+0.111166291 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 28 04:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. 
Nov 28 04:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:36:57 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59094 DF PROTO=TCP SPT=53552 DPT=9105 SEQ=3078962166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD61C030000000001030307) Nov 28 04:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-0818f18800ef844a6a48c8a7ece9ae523d5aa6f809095a5eb180d408e8d636c4-merged.mount: Deactivated successfully. Nov 28 04:36:57 localhost systemd[1]: var-lib-containers-storage-overlay-0818f18800ef844a6a48c8a7ece9ae523d5aa6f809095a5eb180d408e8d636c4-merged.mount: Deactivated successfully. Nov 28 04:36:57 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:36:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59095 DF PROTO=TCP SPT=53552 DPT=9105 SEQ=3078962166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD61FFB0000000001030307) Nov 28 04:36:58 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59323 DF PROTO=TCP SPT=47654 DPT=9100 SEQ=546634030 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD620FA0000000001030307) Nov 28 04:36:58 localhost systemd[1]: var-lib-containers-storage-overlay-d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784-merged.mount: Deactivated successfully. Nov 28 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 28 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:36:59 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:00 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. 
Nov 28 04:37:01 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully. Nov 28 04:37:01 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34849 DF PROTO=TCP SPT=55970 DPT=9882 SEQ=3799603440 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD62D3A0000000001030307) Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:02 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:02 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. 
Nov 28 04:37:03 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:37:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:37:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59097 DF PROTO=TCP SPT=53552 DPT=9105 SEQ=3078962166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD637BA0000000001030307) Nov 28 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:04 localhost podman[241783]: 2025-11-28 09:37:04.69281235 +0000 UTC m=+0.062064506 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=) Nov 28 04:37:04 localhost podman[241783]: 2025-11-28 09:37:04.703828125 +0000 UTC m=+0.073080331 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.) Nov 28 04:37:04 localhost systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully. Nov 28 04:37:04 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:37:05 localhost systemd[1]: var-lib-containers-storage-overlay-af4441ca58e5a3dae70e850402577fe72fc0370c205d9690db9c04c01d30a59b-merged.mount: Deactivated successfully. 
Nov 28 04:37:06 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:07 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:07 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:37:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:08 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:08 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:09 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:37:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30372 DF PROTO=TCP SPT=45784 DPT=9102 SEQ=1342579902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6493A0000000001030307) Nov 28 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:10 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:11 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:11 localhost systemd[1]: var-lib-containers-storage-overlay-16fa74df54e8af55fadaf755a0b31aeec0d57923866d5b313d9489d8d758e3e8-merged.mount: Deactivated successfully. Nov 28 04:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 28 04:37:12 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. 
Nov 28 04:37:13 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59098 DF PROTO=TCP SPT=53552 DPT=9105 SEQ=3078962166 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD658FB0000000001030307) Nov 28 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:37:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:13 localhost podman[241804]: 2025-11-28 09:37:13.725615201 +0000 UTC m=+0.066946934 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 28 04:37:13 localhost podman[241803]: 2025-11-28 09:37:13.78193207 +0000 UTC m=+0.125965620 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 04:37:13 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully. Nov 28 04:37:13 localhost podman[241803]: 2025-11-28 09:37:13.811154305 +0000 UTC m=+0.155187835 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 28 04:37:13 localhost podman[241803]: unhealthy Nov 28 04:37:13 localhost podman[241804]: 2025-11-28 09:37:13.863548846 +0000 UTC m=+0.204880649 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:37:14 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:14 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34851 DF PROTO=TCP SPT=55970 DPT=9882 SEQ=3799603440 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD65CFB0000000001030307) Nov 28 04:37:14 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:37:14 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:37:14 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:14 localhost systemd[1]: var-lib-containers-storage-overlay-0ab58418a3b33798bab22812a6bf35faf1a05b29cb02b615b8bae9fef6fe9073-merged.mount: Deactivated successfully. Nov 28 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:37:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:37:14 localhost systemd[1]: tmp-crun.e7bOxj.mount: Deactivated successfully. Nov 28 04:37:14 localhost podman[241841]: 2025-11-28 09:37:14.894229111 +0000 UTC m=+0.089253924 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 
04:37:14 localhost podman[241841]: 2025-11-28 09:37:14.907378516 +0000 UTC m=+0.102403309 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:37:14 localhost podman[241841]: unhealthy Nov 28 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-d63efe17da859108a09d9b90626ba0c433787abe209cd4ac755f6ba2a5206671-merged.mount: Deactivated successfully. Nov 28 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784-merged.mount: Deactivated successfully. Nov 28 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully. Nov 28 04:37:15 localhost systemd[1]: var-lib-containers-storage-overlay-6e24aa22dcc3c3aeaf326f993725e399e8f3215a32c5fb5c28a2698bed898907-merged.mount: Deactivated successfully. 
Nov 28 04:37:15 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:37:15 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:37:16 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48729 DF PROTO=TCP SPT=41940 DPT=9100 SEQ=3455075542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6663A0000000001030307) Nov 28 04:37:16 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-d8443c9fdf039c2367e44e0edbe81c941f30f604c3f1eccc2fc81efb5a97a784-merged.mount: Deactivated successfully. Nov 28 04:37:17 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. 
Nov 28 04:37:17 localhost podman[241840]: 2025-11-28 09:37:17.973621866 +0000 UTC m=+3.175972895 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller) Nov 28 04:37:18 localhost podman[241840]: 2025-11-28 09:37:18.028399986 +0000 UTC m=+3.230751005 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:37:18 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully. Nov 28 04:37:18 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-55d5530fe8468c8c9907e0aa1de030811941604fa5f46de3db6dc15ec40906dd-merged.mount: Deactivated successfully. Nov 28 04:37:19 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. 
Nov 28 04:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-ae0ebe7656e29542866ff018f5be9a3d02c88268a65814cf045e1b6c30ffd352-merged.mount: Deactivated successfully.
Nov 28 04:37:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:20 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:20 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48730 DF PROTO=TCP SPT=41940 DPT=9100 SEQ=3455075542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD675FA0000000001030307)
Nov 28 04:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 04:37:20 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:20 localhost podman[241886]: 2025-11-28 09:37:20.895486433 +0000 UTC m=+0.099881437 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:37:20 localhost podman[241886]: 2025-11-28 09:37:20.927925121 +0000 UTC m=+0.132320105 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully. Nov 28 04:37:21 localhost systemd[1]: var-lib-containers-storage-overlay-876187b8bc68a02fc79261d7a49dfade5cc37ca730d23b4f758fcf788c522d06-merged.mount: Deactivated successfully. Nov 28 04:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully. Nov 28 04:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:22 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. 
Nov 28 04:37:22 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 28 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:24 localhost systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Nov 28 04:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 28 04:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:25 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:26 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:26 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:27 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6554 DF PROTO=TCP SPT=51392 DPT=9105 SEQ=1980500986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD691330000000001030307) Nov 28 04:37:27 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully. Nov 28 04:37:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:37:28 localhost podman[241907]: 2025-11-28 09:37:28.492790965 +0000 UTC m=+0.093529010 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:37:28 localhost podman[241907]: 2025-11-28 09:37:28.506356002 +0000 UTC m=+0.107094047 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:37:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6555 DF PROTO=TCP SPT=51392 DPT=9105 SEQ=1980500986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6953B0000000001030307) Nov 28 04:37:28 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55352 DF PROTO=TCP SPT=57020 DPT=9882 SEQ=3460966633 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6965D0000000001030307) Nov 28 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-558adb40dc3f0c457c124ec6699b165daa74a355f52d98e7436d696b86369c63-merged.mount: Deactivated successfully. Nov 28 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully. Nov 28 04:37:29 localhost systemd[1]: var-lib-containers-storage-overlay-16fa74df54e8af55fadaf755a0b31aeec0d57923866d5b313d9489d8d758e3e8-merged.mount: Deactivated successfully. Nov 28 04:37:29 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:37:30 localhost systemd[1]: var-lib-containers-storage-overlay-16fa74df54e8af55fadaf755a0b31aeec0d57923866d5b313d9489d8d758e3e8-merged.mount: Deactivated successfully.
Nov 28 04:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 28 04:37:31 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 28 04:37:31 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10889 DF PROTO=TCP SPT=59496 DPT=9105 SEQ=2277982890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6A0FA0000000001030307)
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully.
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:32 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully.
Nov 28 04:37:33 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully.
Nov 28 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Nov 28 04:37:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6557 DF PROTO=TCP SPT=51392 DPT=9105 SEQ=1980500986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6ACFA0000000001030307)
Nov 28 04:37:34 localhost systemd[1]: var-lib-containers-storage-overlay-6e24aa22dcc3c3aeaf326f993725e399e8f3215a32c5fb5c28a2698bed898907-merged.mount: Deactivated successfully.
Nov 28 04:37:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 04:37:35 localhost podman[241926]: 2025-11-28 09:37:35.457756317 +0000 UTC m=+0.073878224 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, distribution-scope=public) Nov 28 04:37:35 localhost podman[241926]: 2025-11-28 09:37:35.468838865 +0000 UTC m=+0.084960762 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 
'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 28 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 28 04:37:35 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:37:35 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:37:36 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully. Nov 28 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-69bca6b1ae1a510e610471f91dc39084eac5a14908c47996b36473212637590d-merged.mount: Deactivated successfully. 
Nov 28 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully.
Nov 28 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:37 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 28 04:37:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:38 localhost kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Nov 28 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 28 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79-merged.mount: Deactivated successfully.
Nov 28 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:38 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53635 DF PROTO=TCP SPT=54900 DPT=9102 SEQ=1277939374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6BE7B0000000001030307)
Nov 28 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Nov 28 04:37:39 localhost systemd[1]: var-lib-containers-storage-overlay-876187b8bc68a02fc79261d7a49dfade5cc37ca730d23b4f758fcf788c522d06-merged.mount: Deactivated successfully.
Nov 28 04:37:40 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Nov 28 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 28 04:37:41 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Nov 28 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Nov 28 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:42 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6558 DF PROTO=TCP SPT=51392 DPT=9105 SEQ=1980500986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6CCFA0000000001030307)
Nov 28 04:37:42 localhost systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Nov 28 04:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:43 localhost systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Nov 28 04:37:44 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47315 DF PROTO=TCP SPT=41096 DPT=9101 SEQ=3060849528 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6D2310000000001030307)
Nov 28 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:37:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Nov 28 04:37:44 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Nov 28 04:37:44 localhost podman[241946]: 2025-11-28 09:37:44.749111796 +0000 UTC m=+0.096464053 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:37:44 localhost podman[241946]: 2025-11-28 09:37:44.778595903 +0000 UTC 
m=+0.125948140 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic 
task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.109 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.110 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.110 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 04:37:45 localhost nova_compute[228497]: 2025-11-28 09:37:45.137 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:45 localhost systemd[1]: tmp-crun.eJdsHo.mount: Deactivated successfully. Nov 28 04:37:45 localhost systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully. Nov 28 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully. Nov 28 04:37:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-8bb70caa6e9f6f4d170c3a5868421bfc24d38542c14f6a27a1edf3bcfdc45d32-merged.mount: Deactivated successfully. Nov 28 04:37:46 localhost systemd[1]: var-lib-containers-storage-overlay-8bb70caa6e9f6f4d170c3a5868421bfc24d38542c14f6a27a1edf3bcfdc45d32-merged.mount: Deactivated successfully. Nov 28 04:37:46 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:37:46 localhost podman[241945]: 2025-11-28 09:37:46.472129808 +0000 UTC m=+1.826089902 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:37:46 localhost podman[241945]: 2025-11-28 09:37:46.501700788 +0000 UTC m=+1.855660882 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Nov 28 04:37:46 localhost podman[241945]: unhealthy Nov 28 04:37:46 localhost podman[241971]: 2025-11-28 09:37:46.51829685 +0000 UTC m=+0.242958478 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:37:46 localhost podman[241971]: 2025-11-28 09:37:46.52941731 +0000 UTC m=+0.254078908 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:37:46 localhost podman[241971]: unhealthy Nov 28 
04:37:46 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53413 DF PROTO=TCP SPT=36928 DPT=9100 SEQ=3824350269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6DB7A0000000001030307) Nov 28 04:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully. Nov 28 04:37:47 localhost systemd[1]: var-lib-containers-storage-overlay-558adb40dc3f0c457c124ec6699b165daa74a355f52d98e7436d696b86369c63-merged.mount: Deactivated successfully. Nov 28 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-558adb40dc3f0c457c124ec6699b165daa74a355f52d98e7436d696b86369c63-merged.mount: Deactivated successfully. Nov 28 04:37:48 localhost nova_compute[228497]: 2025-11-28 09:37:48.161 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:48 localhost nova_compute[228497]: 2025-11-28 09:37:48.161 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:37:48 localhost nova_compute[228497]: 2025-11-28 09:37:48.161 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-f04f6aa8018da724c9daa5ca37db7cd13477323f1b725eec5dac97862d883048-merged.mount: Deactivated successfully. 
Nov 28 04:37:48 localhost nova_compute[228497]: 2025-11-28 09:37:48.370 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:37:48 localhost nova_compute[228497]: 2025-11-28 09:37:48.371 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:48 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:37:48 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Failed with result 'exit-code'. Nov 28 04:37:48 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Main process exited, code=exited, status=1/FAILURE Nov 28 04:37:48 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Failed with result 'exit-code'. Nov 28 04:37:48 localhost systemd[1]: var-lib-containers-storage-overlay-b574f97f279779c52df37c61d993141d596fdb6544fa700fbddd8f35f27a4d3b-merged.mount: Deactivated successfully. Nov 28 04:37:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:37:49 localhost podman[242000]: 2025-11-28 09:37:49.982683751 +0000 UTC m=+0.086558092 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:37:50 localhost nova_compute[228497]: 2025-11-28 09:37:50.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:50 localhost nova_compute[228497]: 2025-11-28 09:37:50.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic 
task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:50 localhost nova_compute[228497]: 2025-11-28 09:37:50.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:50 localhost podman[242000]: 2025-11-28 09:37:50.075422246 +0000 UTC m=+0.179296607 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 28 04:37:50 localhost systemd[1]: 
var-lib-containers-storage-overlay-47afe78ba3ac18f156703d7ad9e4be64941a9d1bd472a4c2a59f4f2c3531ee35-merged.mount: Deactivated successfully. Nov 28 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-f04f6aa8018da724c9daa5ca37db7cd13477323f1b725eec5dac97862d883048-merged.mount: Deactivated successfully. Nov 28 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:37:50 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53414 DF PROTO=TCP SPT=36928 DPT=9100 SEQ=3824350269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD6EB3B0000000001030307) Nov 28 04:37:50 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:37:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:37:50.817 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:37:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:37:50.818 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:37:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:37:50.818 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:37:50 localhost 
systemd[1]: var-lib-containers-storage-overlay-f04f6aa8018da724c9daa5ca37db7cd13477323f1b725eec5dac97862d883048-merged.mount: Deactivated successfully. Nov 28 04:37:50 localhost systemd[1]: var-lib-containers-storage-overlay-f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449-merged.mount: Deactivated successfully. Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.095 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.095 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.095 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.095 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 
09:37:51.096 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.565 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.720 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.721 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=13151MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", 
"product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.721 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.722 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.817 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.817 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.896 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.942 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 
2025-11-28 09:37:51.942 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.957 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.978 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: 
COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:37:51 localhost nova_compute[228497]: 2025-11-28 09:37:51.994 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:37:52 localhost systemd[1]: 
var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:37:52 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:37:52 localhost systemd[1]: var-lib-containers-storage-overlay-47afe78ba3ac18f156703d7ad9e4be64941a9d1bd472a4c2a59f4f2c3531ee35-merged.mount: Deactivated successfully. Nov 28 04:37:52 localhost systemd[1]: var-lib-containers-storage-overlay-c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309-merged.mount: Deactivated successfully. Nov 28 04:37:52 localhost nova_compute[228497]: 2025-11-28 09:37:52.495 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:37:52 localhost nova_compute[228497]: 2025-11-28 09:37:52.500 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:37:52 localhost nova_compute[228497]: 2025-11-28 09:37:52.515 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 
1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:37:52 localhost nova_compute[228497]: 2025-11-28 09:37:52.516 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:37:52 localhost nova_compute[228497]: 2025-11-28 09:37:52.517 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:37:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:37:53 localhost podman[242069]: 2025-11-28 09:37:53.245708082 +0000 UTC m=+0.100828921 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', 
'--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:37:53 localhost podman[242069]: 2025-11-28 09:37:53.275388875 +0000 UTC m=+0.130509724 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, 
container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66-merged.mount: Deactivated successfully. Nov 28 04:37:53 localhost systemd[1]: var-lib-containers-storage-overlay-f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a-merged.mount: Deactivated successfully. Nov 28 04:37:53 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:53 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:53 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/console\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/console: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/faillock\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/faillock: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/motd.d\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/motd.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/sepermit\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/run/sepermit: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log: no such file or directory" Nov 28 04:37:54 localhost kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. Nov 28 04:37:54 localhost kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior. 
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.rpm.log\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.rpm.log: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/hawkey.log\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/hawkey.log: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.librepo.log\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.librepo.log: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.log\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/log/dnf.log: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite-shm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite-shm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite-wal\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/history.sqlite-wal: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/appstream-831abc7e9d6a1a72\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/appstream-831abc7e9d6a1a72: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/appstream-831abc7e9d6a1a72/countme\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/appstream-831abc7e9d6a1a72/countme: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/baseos-044cae74d71fe9ea\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/baseos-044cae74d71fe9ea: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/baseos-044cae74d71fe9ea/countme\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/baseos-044cae74d71fe9ea/countme: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/epel-low-priority-4b20c555de8aed94\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/epel-low-priority-4b20c555de8aed94: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/epel-low-priority-4b20c555de8aed94/countme\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/epel-low-priority-4b20c555de8aed94/countme: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/extras-common-581a10b8a62294e3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/extras-common-581a10b8a62294e3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/extras-common-581a10b8a62294e3/countme\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/dnf/repos/extras-common-581a10b8a62294e3/countme: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite-shm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite-shm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite-wal\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/lib/rpm/rpmdb.sqlite-wal: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db/sudo\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db/sudo: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db/sudo/lectured\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/db/sudo/lectured: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/kolla\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/kolla: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/libvirt\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/libvirt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/qemu\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/qemu: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/hugetlbfs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/spool/mail/hugetlbfs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache/ldconfig\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache/ldconfig: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache/ldconfig/aux-cache\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/var/cache/ldconfig/aux-cache: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/adjtime.rpmnew\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/adjtime.rpmnew: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/ld.so.cache\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/ld.so.cache: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudo.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudo.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/gshadow-\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/gshadow-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subgid\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subgid: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subgid-\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subgid-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/group-\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/group-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/gshadow\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/gshadow: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudo-ldap.conf\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudo-ldap.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/libuser.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/libuser.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/shadow-\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/shadow-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subuid\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subuid: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/faillock.conf\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/faillock.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwquality.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwquality.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/time.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/time.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/access.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/access.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.apps\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.apps: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.perms\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.perms: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.perms.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.perms.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/group.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/group.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/limits.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/limits.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/limits.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/limits.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.init\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.init: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/opasswd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/opasswd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwquality.conf.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwquality.conf.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/sepermit.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/sepermit.conf: no such file or directory" Nov 28 
04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/chroot.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/chroot.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.handlers\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/console.handlers: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/namespace.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pam_env.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pam_env.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwhistory.conf\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/security/pwhistory.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d/which2.csh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d/which2.csh: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d/which2.sh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/profile.d/which2.sh: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/config-util\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/config-util: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/su\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/su: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/chfn\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/chfn: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/chsh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/chsh: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/fingerprint-auth\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/fingerprint-auth: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/login\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/login: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Getting root fs size for \"acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d\": getting diffsize of layer \"06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66\" and its parent \"cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa\": creating overlay mount to /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/YH5WONIZBGF5MKGBVLYJQMRTV6,upperdir=/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff,workdir=/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/work,nodev,metacopy=on\": no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/runuser\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/runuser: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/su-l\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/su-l: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/sudo-i\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/sudo-i: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/other\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/other: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/remote\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/remote: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/runuser-l\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/runuser-l: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/smartcard-auth\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/smartcard-auth: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/sudo\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/sudo: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/system-auth\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/system-auth: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/password-auth\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/password-auth: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/postlogin\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/pam.d/postlogin: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/passwd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/passwd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/nsswitch.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/nsswitch.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subuid-\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/subuid-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudoers.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudoers.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/group\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/group: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/passwd-\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/passwd-: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudoers\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/sudoers: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/protected.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/protected.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/protected.d/sudo.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/protected.d/sudo.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/dnf.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/dnf/dnf.conf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/shadow\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/etc/shadow: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/openstack\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/openstack: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/NEWS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/NEWS: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/README\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/README: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/TODO\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/TODO: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/example.ini\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/example.ini: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/crudini/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html/index.html\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html/index.html: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html/style.css\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/html/style.css: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/Changelog\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/Changelog: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/README\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/doc/python-iniparse/README: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libeconf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libeconf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libeconf/LICENSE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libeconf/LICENSE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb/LICENSE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb/LICENSE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb/lgpl-2.1.txt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libdb/lgpl-2.1.txt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/dumb-init\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/dumb-init: no such file or directory" 
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/dumb-init/LICENSE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/dumb-init/LICENSE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam/Copyright\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam/Copyright: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam/gpl-2.0.txt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/pam/gpl-2.0.txt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libpwquality\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libpwquality: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libpwquality/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libpwquality/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse/LICENSE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse/LICENSE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse/LICENSE-PSF\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/python-iniparse/LICENSE-PSF: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error 
msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/sudo\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/sudo: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/sudo/LICENSE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/sudo/LICENSE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libuser\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libuser: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libuser/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libuser/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng: no such file or directory" Nov 28 
04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng/COPYING.LIB\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/procps-ng/COPYING.LIB: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.BSD-3-Clause\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.BSD-3-Clause: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.BSD-4-Clause-UC\": 
lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.BSD-4-Clause-UC: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.GPL-2.0-or-later\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.GPL-2.0-or-later: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.GPL-3.0-or-later\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.GPL-3.0-or-later: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.ISC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.ISC: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.LGPL-2.1-or-later\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/util-linux/COPYING.LGPL-2.1-or-later: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/cracklib\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/cracklib: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/cracklib/COPYING.LIB\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/cracklib/COPYING.LIB: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/which\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/which: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/which/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/which/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/openssl\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/openssl: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/openssl/LICENSE.txt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/openssl/LICENSE.txt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk/COPYING.LGPL-2.1-or-later\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libfdisk/COPYING.LGPL-2.1-or-later: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libutempter\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libutempter: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libutempter/COPYING\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/licenses/libutempter/COPYING: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fincore\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fincore: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/nsenter\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/nsenter: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rename\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rename: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/uuidparse\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/uuidparse: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wipefs\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wipefs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsmem\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsmem: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/pivot_root\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/pivot_root: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rfkill\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rfkill: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wall\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wall: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/namei\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/namei: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chsh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chsh: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hwclock\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hwclock: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcmk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcmk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/readprofile\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/readprofile: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/uuidgen\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/uuidgen: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chmem\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chmem: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lslogins\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lslogins: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swapoff\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swapoff: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/eject\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/eject: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/zramctl\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/zramctl: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chrt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chrt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/colrm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/colrm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fdisk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fdisk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/logger\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/logger: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/renice\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/renice: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/col\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/col: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs.minix\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs.minix: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkswap\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkswap: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fstrim\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fstrim: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ldattach\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ldattach: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mesg\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mesg: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/runuser\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/runuser: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/prlimit\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/prlimit: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/colcrt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/colcrt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/losetup\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/losetup: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsirq\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsirq: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lslocks\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lslocks: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mcookie\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mcookie: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rev\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rev: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setarch\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setarch: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkid\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkid: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/cfdisk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/cfdisk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swaplabel\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swaplabel: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swapon\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/swapon: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/taskset\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/taskset: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkdiscard\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkdiscard: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blockdev\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blockdev: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs.cramfs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mkfs.cramfs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/addpart\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/addpart: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chcpu\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chcpu: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ctrlaltdel\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ctrlaltdel: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck.minix\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck.minix: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsipc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsipc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/partx\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/partx: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck.cramfs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsck.cramfs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/isosize\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/isosize: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/last\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/last: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lscpu\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lscpu: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rtcwake\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/rtcwake: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/getopt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/getopt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hardlink\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hardlink: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ionice\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ionice: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/resizepart\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/resizepart: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/sfdisk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/sfdisk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ul\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ul: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkzone\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/blkzone: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/unshare\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/unshare: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/column\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/column: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wdctl\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/wdctl: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcrm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/ipcrm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/irqtop\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/irqtop: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/look\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/look: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsns\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsns: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/scriptreplay\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/scriptreplay: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/write\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/write: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/more\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/more: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setsid\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setsid: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setpriv\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setpriv: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/su\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/su: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsfreeze\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fsfreeze: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mountpoint\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/mountpoint: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/findmnt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/findmnt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsblk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/lsblk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/script\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/script: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chfn\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/chfn: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/dmesg\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/dmesg: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/findfs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/findfs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fallocate\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fallocate: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/utmpdump\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/utmpdump: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/whereis\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/whereis: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/delpart\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/delpart: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fdformat\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/fdformat: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/flock\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/flock: no such file or directory" Nov 28 04:37:54 localhost 
podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/scriptlive\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/scriptlive: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/cal\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/cal: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hexdump\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/hexdump: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setterm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/bash-completion/completions/setterm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.pwi\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.pwi: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.hwm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.hwm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.pwd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.pwd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.pwi\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib-small.pwi: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib.magic\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/cracklib.magic: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.hwm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.hwm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.pwd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/cracklib/pw_dict.pwd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/pam.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/pam.d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/tcib\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/share/tcib: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" 
level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pkill\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pkill: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudo\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudo: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lslocks\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lslocks: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/umount\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/umount: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/fincore\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/fincore: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/free\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/free: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/linux32\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/linux32: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chrt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chrt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lastb\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lastb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudoreplay\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudoreplay: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/watch\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/watch: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/isosize\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/isosize: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mesg\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mesg: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/rename\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/rename: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/slabtop\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/slabtop: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uuidgen\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uuidgen: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/more\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/more: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setarch\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setarch: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcs\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcs: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/prlimit\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/prlimit: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwscore\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwscore: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/renice\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/renice: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/hexdump\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/hexdump: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/kill\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/kill: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/dumb-init\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/dumb-init: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mountpoint\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mountpoint: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/renew-dummy-cert\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/renew-dummy-cert: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uname26\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uname26: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/unshare\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/unshare: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/colrm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/colrm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/cvtsudoers\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/cvtsudoers: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lscpu\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lscpu: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/whereis\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/whereis: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/findmnt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/findmnt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setsid\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setsid: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/write\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/write: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pidof\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pidof: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mount\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mount: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/last\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/last: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/look\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/look: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pmap\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pmap: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chmem\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chmem: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/linux64\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/linux64: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/w\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/w: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/fallocate\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/fallocate: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/login\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/login: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/rev\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/rev: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/scriptlive\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/scriptlive: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsipc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsipc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/wdctl\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/wdctl: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lslogins\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lslogins: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/su\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/su: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lchfn\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lchfn: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ps\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ps: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/cal\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/cal: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcrm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcrm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwdx\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwdx: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/nsenter\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/nsenter: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/col\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/col: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsirq\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsirq: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/namei\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/namei: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setterm\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setterm: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcmk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ipcmk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/top\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/top: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/which\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/which: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsblk\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsblk: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/flock\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/flock: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/dmesg\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/dmesg: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pgrep\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pgrep: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chsh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chsh: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mcookie\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/mcookie: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uptime\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uptime: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/wall\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/wall: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/x86_64\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/x86_64: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/skill\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/skill: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ionice\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ionice: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/snice\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/snice: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uuidparse\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/uuidparse: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/script\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/script: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/i386\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/i386: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chfn\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/chfn: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/crudini\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/crudini: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/hardlink\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/hardlink: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/taskset\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/taskset: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/colcrt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/colcrt: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/choom\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/choom: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/make-dummy-cert\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/make-dummy-cert: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/scriptreplay\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/scriptreplay: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/eject\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/eject: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pidwait\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pidwait: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/openssl\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/openssl: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwmake\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/pwmake: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudoedit\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/sudoedit: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/getopt\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/getopt: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/vmstat\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/vmstat: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/column\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/column: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setpriv\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/setpriv: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/tload\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/tload: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/logger\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/logger: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/utmpdump\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/utmpdump: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/irqtop\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/irqtop: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lchsh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lchsh: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsmem\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsmem: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsns\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/lsns: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ul\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/bin/ul: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/utempter\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/utempter: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/utempter/utempter\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/utempter/utempter: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/group_file.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/group_file.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so.0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so.0: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sudo_noexec.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sudo_noexec.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sudoers.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sudoers.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/audit_json.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/audit_json.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so.0.0.0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/libsudo_util.so.0.0.0: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sample_approval.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sample_approval.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sesh\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/sesh: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/system_group.so\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/libexec/sudo/system_group.so: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/pam_namespace.service\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/pam_namespace.service: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/fstrim.service\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/fstrim.service: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/fstrim.timer\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/systemd/system/fstrim.timer: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d/pam.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d/pam.conf: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d/sudo.conf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/tmpfiles.d/sudo.conf: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14/4c9ca8a045d0349fbe4927391a86f2d7dcf761\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14/4c9ca8a045d0349fbe4927391a86f2d7dcf761: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14/5c37f5a9ccc89098131b09148296bd7ac13ab0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/14/5c37f5a9ccc89098131b09148296bd7ac13ab0: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/2a2ebe0c623d60ef6228e73ba3098b9cce0a7a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/2a2ebe0c623d60ef6228e73ba3098b9cce0a7a: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/4f272ea09fcb4b40238965fcac16d167461898\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/4f272ea09fcb4b40238965fcac16d167461898: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/898840061e6c30a02193e64f2166c48bc99155\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/15/898840061e6c30a02193e64f2166c48bc99155: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b/4c3d6759be9e23c97a28ea07abdd571e88cd34\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b/4c3d6759be9e23c97a28ea07abdd571e88cd34: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b/58ff94084c438c1d0fcdab593997acb4d6aa2b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3b/58ff94084c438c1d0fcdab593997acb4d6aa2b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45/00dde1d2c8968ed19a7a39278e7a11292a2945\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45/00dde1d2c8968ed19a7a39278e7a11292a2945: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45/c5bf221d3ae19055276ea851dd02ba757e3d5a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/45/c5bf221d3ae19055276ea851dd02ba757e3d5a: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/46\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/46: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/46/30c961c15313564d770cc23a1607e117715946\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/46/30c961c15313564d770cc23a1607e117715946: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1/83b7ea31562bfa3436ea76c8d502a66eb92a39\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1/83b7ea31562bfa3436ea76c8d502a66eb92a39: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1/c5e9b6d42ceda4c3b61be395bc22ad7ee7beee\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e1/c5e9b6d42ceda4c3b61be395bc22ad7ee7beee: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/59\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/59: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/59/252e8a7a3c8f925d45b9605be1805793384e92\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/59/252e8a7a3c8f925d45b9605be1805793384e92: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f3: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f3/7e18717bc313da1a0a1cfd7b3882d3d0c447a7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f3/7e18717bc313da1a0a1cfd7b3882d3d0c447a7: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ec\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ec: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ec/f2497fc632dd43ce6a0bd29d3fdd228a0519fc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ec/f2497fc632dd43ce6a0bd29d3fdd228a0519fc: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6b/f0e500addd5efc006db98f04a486b2f43824d1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6b/f0e500addd5efc006db98f04a486b2f43824d1: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f/54752c462f7538c83ab991a371d61801a82afa\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f/54752c462f7538c83ab991a371d61801a82afa: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f/ffa242074a8347039b695cbbe6d11acfb35f52\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6f/ffa242074a8347039b695cbbe6d11acfb35f52: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7f: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7f/a537b7784d9a63eec4db9fce96cc92f133cadc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7f/a537b7784d9a63eec4db9fce96cc92f133cadc: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7/b4450f86fc03d5293c3430d1909aa63bbe483b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7/b4450f86fc03d5293c3430d1909aa63bbe483b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7/fc1e9e32f38cfa9430496500ca3c2e58b0b7f9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f7/fc1e9e32f38cfa9430496500ca3c2e58b0b7f9: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0b/036b4fce47462d760fbe5ddd95593d8eb6bf1b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0b/036b4fce47462d760fbe5ddd95593d8eb6bf1b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b/2d7c2788cdc64e0f6b1d59a7237506eca28172\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b/2d7c2788cdc64e0f6b1d59a7237506eca28172: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b/e190991cd262bab77f1ace91d79f0381924ff2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5b/e190991cd262bab77f1ace91d79f0381924ff2: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92/2530cc8e438ec6b5bbd56dbe8bed1f906b2325\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92/2530cc8e438ec6b5bbd56dbe8bed1f906b2325: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92/39d33f80859efa086ccc5c1e209d55bdd1bf55\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/92/39d33f80859efa086ccc5c1e209d55bdd1bf55: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f5\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f5: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f5/1f9758e71666b7f3a8a251af89cedafe3eb845\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f5/1f9758e71666b7f3a8a251af89cedafe3eb845: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a/c47a004b00bd1891c73ecae48ddc3c459bf378\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a/c47a004b00bd1891c73ecae48ddc3c459bf378: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a/d19969af8bba5d211c413be29a6918f345109b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6a/d19969af8bba5d211c413be29a6918f345109b: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a8: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a8/2e2b0f96ff1bc114067196146fa473104d2b5e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a8/2e2b0f96ff1bc114067196146fa473104d2b5e: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6/dccfab4944fb284428ce74bb6510ea77861e9c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6/dccfab4944fb284428ce74bb6510ea77861e9c: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6/f4c929bb4296b957784cc7dab0f5ba63b7af1c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b6/f4c929bb4296b957784cc7dab0f5ba63b7af1c: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/7632fc725bf917f235d8fd8d9cec7ed849dd4c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/7632fc725bf917f235d8fd8d9cec7ed849dd4c: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/93b95067c8c4e9fc0c3e46a107308029abd5a6\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/93b95067c8c4e9fc0c3e46a107308029abd5a6: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/caebb6a887eb763ded31ead03749712860ffa9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b9/caebb6a887eb763ded31ead03749712860ffa9: no such
file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/19\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/19: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/19/e0aa1f3d6b4a3a3503574398261f32004d8386\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/19/e0aa1f3d6b4a3a3503574398261f32004d8386: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/f5bc803701ac92cf44ef9c0fb8b9146f0e4795\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/f5bc803701ac92cf44ef9c0fb8b9146f0e4795: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/fadb8192eb83d1a51f02869000e47be3640a2f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/fadb8192eb83d1a51f02869000e47be3640a2f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/6c3ab9ad2eef282817ed5e8138ada94610f225\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b7/6c3ab9ad2eef282817ed5e8138ada94610f225: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0c/ab14fabb5747947340f306da22482235815799\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0c/ab14fabb5747947340f306da22482235815799: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7a\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7a/b3795a4e53531ac7011ddfb16959d8d8b549a4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7a/b3795a4e53531ac7011ddfb16959d8d8b549a4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c2/4b0d30a6fb7869da102c529afd6d316e0ea338\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c2/4b0d30a6fb7869da102c529afd6d316e0ea338: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/49\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/49: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not 
stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/49/9cee2296786420301a271fc12acd2d64c40dc0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/49/9cee2296786420301a271fc12acd2d64c40dc0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/4554c73944306a3dd62ed113634be016dbc513\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/4554c73944306a3dd62ed113634be016dbc513: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/4572434ba92404c12db316b384dfc38d89f59e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/4572434ba92404c12db316b384dfc38d89f59e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/8e8136bd20adda1f521c21ef405baddfcf2023\": 
lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5a/8e8136bd20adda1f521c21ef405baddfcf2023: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d/10d6a6e84ae2279363c55ad3a63a4063d14e5c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d/10d6a6e84ae2279363c55ad3a63a4063d14e5c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d/70c28d158a258757c0ef0d27c79b21d6c2210a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6d/70c28d158a258757c0ef0d27c79b21d6c2210a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b5\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b5: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b5/a0e5b157378a621d7f37368e5f893372711b10\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b5/a0e5b157378a621d7f37368e5f893372711b10: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7c/95194adad54ca6792c90e26093918a91ba2424\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7c/95194adad54ca6792c90e26093918a91ba2424: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e9/f3b267dd13d8de7109589b9c02651a18a9535c\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e9/f3b267dd13d8de7109589b9c02651a18a9535c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/10\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/10: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/10/ec08deb5b4967055a44fca5422b286ce8020e1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/10/ec08deb5b4967055a44fca5422b286ce8020e1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b8/9c488650845c00ffc32c7c079165acc04d623c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b8/9c488650845c00ffc32c7c079165acc04d623c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58/f15363c0b49dd71b1ab867f4eba1a4aead4b68\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58/f15363c0b49dd71b1ab867f4eba1a4aead4b68: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58/f15363c0b49dd71b1ab867f4eba1a4aead4b68.1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/58/f15363c0b49dd71b1ab867f4eba1a4aead4b68.1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/99\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/99: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/99/3a4f481887e9e41d0bd488d66103e4b6a20563\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/99/3a4f481887e9e41d0bd488d66103e4b6a20563: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e3/b989f8fbfed34ea2f3561debf5bcc14d670e63\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e3/b989f8fbfed34ea2f3561debf5bcc14d670e63: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48/702f2a14c21df27c4716697129fc7cb163161b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48/702f2a14c21df27c4716697129fc7cb163161b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48/a908673b2632815a160ffa3c00bb49d8e11867\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/48/a908673b2632815a160ffa3c00bb49d8e11867: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b/4fe579fe4870793a52a315948b4ef2ba2f15f2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b/4fe579fe4870793a52a315948b4ef2ba2f15f2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b/fe211f59b3a00a0be260cbccc7e556b5ebe659\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9b/fe211f59b3a00a0be260cbccc7e556b5ebe659: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1d/554997639785e70765aca467f08d7cd76c48da\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1d/554997639785e70765aca467f08d7cd76c48da: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2f/8c705d070c393e967900e865fab6c390af9595\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2f/8c705d070c393e967900e865fab6c390af9595: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/07\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/07: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/07/e0ad65914c80af84f7ba0a3f04094e02c27eac\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/07/e0ad65914c80af84f7ba0a3f04094e02c27eac: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/be\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/be: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/be/46282a4ab9f82e481b73b2028c2e0d83f61c01\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/be/46282a4ab9f82e481b73b2028c2e0d83f61c01: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not 
stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e/871a3d460bdbf5b9e652b66d954d6b41586adb\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e/871a3d460bdbf5b9e652b66d954d6b41586adb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e/12610a0a6e2cee03e5c5ef324a795706ca4af8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1e/12610a0a6e2cee03e5c5ef324a795706ca4af8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24/62560acbc4db0ed93e1556faca90d4ee288dfa\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24/62560acbc4db0ed93e1556faca90d4ee288dfa: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24/b36014c5c6e0de63afdcbcc33de893839534c1\": 
lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/24/b36014c5c6e0de63afdcbcc33de893839534c1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/37\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/37: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/37/bd1e0df315849b164694a68817d1e4a6a23393\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/37/bd1e0df315849b164694a68817d1e4a6a23393: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f9/1f8ca0db0a78ec1cbb5fbea29c2c840a4d604f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f9/1f8ca0db0a78ec1cbb5fbea29c2c840a4d604f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77/eb94029352d9bad8d41379eec908a9f8ffe7a3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77/eb94029352d9bad8d41379eec908a9f8ffe7a3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77/08e397769139126ec910a9d05fc6b1ca26da0a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/77/08e397769139126ec910a9d05fc6b1ca26da0a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/88\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/88: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/88/62148e74a680ce25d4223a3875ccc5ec4ce17e\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/88/62148e74a680ce25d4223a3875ccc5ec4ce17e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bd/15706c05deff94d631801f9704d7ffc5e0e076\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bd/15706c05deff94d631801f9704d7ffc5e0e076: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1b/e9ceb22be877d4e469efa50445f07c8c1f83f8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1b/e9ceb22be877d4e469efa50445f07c8c1f83f8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/98\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/98: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/98/89e3ca48c9b5f9ad98d1a2569d23b36ccb6ab8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/98/89e3ca48c9b5f9ad98d1a2569d23b36ccb6ab8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f/c1490e62b94b1b46bffc368f1f51848743093e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f/c1490e62b94b1b46bffc368f1f51848743093e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f/f45c232391ca02ee245fd9e7eb80696dd0c122\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9f/f45c232391ca02ee245fd9e7eb80696dd0c122: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/38\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/38: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/38/8775e816e130a4641577668abf61635a1d3735\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/38/8775e816e130a4641577668abf61635a1d3735: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86/8158bb58f3941c3932a76c19dea8401d0d881c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86/8158bb58f3941c3932a76c19dea8401d0d881c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86/ec315c0b77d7f9f34cfac0e53a7e8fecad2d57\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/86/ec315c0b77d7f9f34cfac0e53a7e8fecad2d57: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3/95fd9884705049862fa42e4591fd68abd9b40a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3/95fd9884705049862fa42e4591fd68abd9b40a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3/d4a4a3e9b1870210d215da2bd2b7830b778381\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b3/d4a4a3e9b1870210d215da2bd2b7830b778381: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f0/bffb8d4078741cd77e8d78ffb7d3395c5bc845\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f0/bffb8d4078741cd77e8d78ffb7d3395c5bc845: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3a/65fe582c01b8683b5d078878811aaa8464138b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3a/65fe582c01b8683b5d078878811aaa8464138b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3c\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3c/b67309a29398e238aec66d4de9f0f23235cd91\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3c/b67309a29398e238aec66d4de9f0f23235cd91: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/73bb368712427a628e1ca99543f4add99e3cf0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/73bb368712427a628e1ca99543f4add99e3cf0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/d797bbdc5dbcf1ca7ee8d40c176ff1f3ffe0c8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/d797bbdc5dbcf1ca7ee8d40c176ff1f3ffe0c8: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/f41df6594d8b065ccb75651070a67013eeba46\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/f41df6594d8b065ccb75651070a67013eeba46: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/0950e9365e0b8aa3bd7209ac6ea2c1696d124d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/0950e9365e0b8aa3bd7209ac6ea2c1696d124d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/3796d2364876e0b3af34a9ca234eb089565b30\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/3796d2364876e0b3af34a9ca234eb089565b30: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/4330739c6cfd738d9c374c510dc515f0b78701\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6c/4330739c6cfd738d9c374c510dc515f0b78701: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73/f00ef7111ba2cc8a69497b41beba35ddf06559\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73/f00ef7111ba2cc8a69497b41beba35ddf06559: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73/f0ad13c2ab751bce3cdd92706f30e9c5ebafad\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/73/f0ad13c2ab751bce3cdd92706f30e9c5ebafad: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ad\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ad: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ad/673b42528afae3dc68f5d5219f043a79abca02\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ad/673b42528afae3dc68f5d5219f043a79abca02: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/af\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/af: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/af/894db221956e39b5978a4ce466652fd0972337\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/af/894db221956e39b5978a4ce466652fd0972337: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d9/519f4dc1369aa97ab76157a95baaf1ac4c530b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d9/519f4dc1369aa97ab76157a95baaf1ac4c530b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc/7869b92773180b32b3204de10557c09b629e05\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc/7869b92773180b32b3204de10557c09b629e05: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc/67454d54493ef8831fb8b35eb0851879e1caae\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fc/67454d54493ef8831fb8b35eb0851879e1caae: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/26\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/26: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/26/b00e9ec1bf99e19e512dd3454e7cfb17150d11\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/26/b00e9ec1bf99e19e512dd3454e7cfb17150d11: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/64\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/64: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/64/b57799da4528e656def0ff3dfc50b219891291\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/64/b57799da4528e656def0ff3dfc50b219891291: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2/b692343d2750000f671c4b7358b3dd3d3314e8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2/b692343d2750000f671c4b7358b3dd3d3314e8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2/23f23002f64e7fbdc460b927be82bf93da2995\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a2/23f23002f64e7fbdc460b927be82bf93da2995: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ba\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ba: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ba/e16be2abdc0a78548e83dfa5389859eba58b10\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ba/e16be2abdc0a78548e83dfa5389859eba58b10: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bf/e95f8fd60bcb4a480d848ba2720e6acdb35903\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bf/e95f8fd60bcb4a480d848ba2720e6acdb35903: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1a/3e845de17389efa0d2e4a4c24254ba724bd22a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1a/3e845de17389efa0d2e4a4c24254ba724bd22a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d4/d33e6813da9853d3d83ea82d99825da0512185\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d4/d33e6813da9853d3d83ea82d99825da0512185: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/de\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/de: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/de/e3b742fb69d55b23d5efe7a2a7d92d8a8c513c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/de/e3b742fb69d55b23d5efe7a2a7d92d8a8c513c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/61\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/61: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/61/661bfa2cab22b87d022bd131c2219e47cb9de9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/61/661bfa2cab22b87d022bd131c2219e47cb9de9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/97\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/97: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/97/342aae192903738d26584f7696fdafdd4c0fe8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/97/342aae192903738d26584f7696fdafdd4c0fe8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a/24488ee7cfbcaba9e9d761deb8fab654f9ed5f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a/24488ee7cfbcaba9e9d761deb8fab654f9ed5f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a/5ed14cb9cec518130b09d0dbd0babb92715773\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9a/5ed14cb9cec518130b09d0dbd0babb92715773: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/1dc2bf18b1193c89a2a0cad01da6821aa50675\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/1dc2bf18b1193c89a2a0cad01da6821aa50675: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/58fe282871b83f3f49a0a7ec4f51f0f2b58318\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/58fe282871b83f3f49a0a7ec4f51f0f2b58318: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/9093df2183bf3294ddc51302b17aa0a6e342a4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9c/9093df2183bf3294ddc51302b17aa0a6e342a4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0f/ff831b766a546cb5f767c4af9b4dc136502867\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0f/ff831b766a546cb5f767c4af9b4dc136502867: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/11\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/11: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/11/81a7a886960671578c62f6b4a3507a289416ee\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/11/81a7a886960671578c62f6b4a3507a289416ee: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/53\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/53: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/53/c43ba27bb0d22fc5833c1808446f07e5d9e086\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/53/c43ba27bb0d22fc5833c1808446f07e5d9e086: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/84\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/84: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/84/4d4d3bf1964d3746e640adf8a0e37ce014c4b5\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/84/4d4d3bf1964d3746e640adf8a0e37ce014c4b5: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not 
stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8b/42721f76cd754eb8f32628f227983835dcb222\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8b/42721f76cd754eb8f32628f227983835dcb222: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e/6995ce97fef81f5415ca55eb0b712ae0ee7523\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e/6995ce97fef81f5415ca55eb0b712ae0ee7523: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e/850425a437dbec491bbc6d689fa3979534bf85\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/7e/850425a437dbec491bbc6d689fa3979534bf85: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fb\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fb/da7a1ca0d4394e5bf2fbdbfdb3553515d5e8d8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fb/da7a1ca0d4394e5bf2fbdbfdb3553515d5e8d8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e/1c3cab5d94f3cd5d4166b6404861200e992b5e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e/1c3cab5d94f3cd5d4166b6404861200e992b5e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e/d8ff701ed7a87a370bb4d38936cb4993d27b33\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/6e/d8ff701ed7a87a370bb4d38936cb4993d27b33: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/81\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/81: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/81/040de175ba8cd678646ebac44fa7c5d9705edb\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/81/040de175ba8cd678646ebac44fa7c5d9705edb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/96\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/96: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/96/0023ceeab60566c12fff0c7b88e5bc92a05156\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/96/0023ceeab60566c12fff0c7b88e5bc92a05156: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c4\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c4/10bc1a4b973c150e7971413ec03708b41b9f2f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c4/10bc1a4b973c150e7971413ec03708b41b9f2f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ca\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ca: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ca/e982e5ef85af3a29629c68db468dbba507489b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ca/e982e5ef85af3a29629c68db468dbba507489b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not 
stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3e/d36716e9b4198a54adb547acd1593b840afde4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/3e/d36716e9b4198a54adb547acd1593b840afde4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/66\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/66: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/66/aa0a18263c4a29d3b702e956546a16250a9258\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/66/aa0a18263c4a29d3b702e956546a16250a9258: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/90\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/90: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/90/08eb9b2ca67a83ad17fa9aaecebf19cddef3ad\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/90/08eb9b2ca67a83ad17fa9aaecebf19cddef3ad: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0/671f3a73b49ece2f4706012981382cf00c6cc0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0/671f3a73b49ece2f4706012981382cf00c6cc0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0/a662334a9e43ca3cfde299f4e78a7ef3ce0bcc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c0/a662334a9e43ca3cfde299f4e78a7ef3ce0bcc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b/bc5d4b63f30341e6035ae54a20ab7a18e8e0d8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b/bc5d4b63f30341e6035ae54a20ab7a18e8e0d8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b/3fb60aeb8470105990c515a34fc8abe0291182\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4b/3fb60aeb8470105990c515a34fc8abe0291182: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2/6dedc3e6b0bcd5852e10617fbe64cac139e244\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2/6dedc3e6b0bcd5852e10617fbe64cac139e244: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2/c420317f7745d79d7c16d80573f057e8c982a1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d2/c420317f7745d79d7c16d80573f057e8c982a1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/12\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/12: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/12/ad33ab2736109dd8f6f14d911cdd8b0db46807\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/12/ad33ab2736109dd8f6f14d911cdd8b0db46807: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4c/0acfcee4bd45d6e9feb1d69242ca919dd3ff55\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4c/0acfcee4bd45d6e9feb1d69242ca919dd3ff55: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c/37ae4650778b6e29873532f105f97cd9fbc21a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c/37ae4650778b6e29873532f105f97cd9fbc21a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c/9ba98bdda1d62ac9eeaaad05b73bda17f6ff7f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8c/9ba98bdda1d62ac9eeaaad05b73bda17f6ff7f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a3/cf7e95bd7d3184d470665fe46802582d0633c3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a3/cf7e95bd7d3184d470665fe46802582d0633c3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0e0d7841fe061d3bd6e365019fbc2c34ef5005\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0e0d7841fe061d3bd6e365019fbc2c34ef5005: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b.1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b.1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b.2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c1/0fa7f53cab2b23f22eff1131d56bbeb327f67b.2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c6\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c6: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c6/c28655085e12b3c2543458283353c715016cbc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c6/c28655085e12b3c2543458283353c715016cbc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/95bc757ef096512cda161ed4e8678725e511cf\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/95bc757ef096512cda161ed4e8678725e511cf: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/dbce3b779f897ea39113f1350e7d0c6c24b4f9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/dbce3b779f897ea39113f1350e7d0c6c24b4f9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/fbcb2a53577122e08e6d43c0ed9e10d54859c1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/32/fbcb2a53577122e08e6d43c0ed9e10d54859c1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f/8a9d207fc2fad2166c8fe36442ae994efeac96\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f/8a9d207fc2fad2166c8fe36442ae994efeac96: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f/f38b5d6ddb25f815b84bf7f1d8343a68f791d9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5f/f38b5d6ddb25f815b84bf7f1d8343a68f791d9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/68\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/68: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/68/a248b415244165868407bc67e9347098fc3ac4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/68/a248b415244165868407bc67e9347098fc3ac4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57/744f0680a7727b40bfccc37785b46bdb96c396\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57/744f0680a7727b40bfccc37785b46bdb96c396: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57/a4bf13991f53a7ced9665f8bfeb0ce914cf4cd\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/57/a4bf13991f53a7ced9665f8bfeb0ce914cf4cd: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/65\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/65: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/65/187cf35e5a3ef5fb454dd10a2e893d1d499c0a\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/65/187cf35e5a3ef5fb454dd10a2e893d1d499c0a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5c/4706309dfa01c0a55b0b039f2f5d9403696994\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5c/4706309dfa01c0a55b0b039f2f5d9403696994: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80/edb5b351a77aa371fd7f7e497e0126fd9b8803\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80/edb5b351a77aa371fd7f7e497e0126fd9b8803: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80/f9e01b60cf69e6aa348faea41d6532d699a6b9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/80/f9e01b60cf69e6aa348faea41d6532d699a6b9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8f/84882d82f114554389afaec2a9ee86358c1e1a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/8f/84882d82f114554389afaec2a9ee86358c1e1a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d7: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d7/8b04f97f68129dc25e695a26304c781047120b\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d7/8b04f97f68129dc25e695a26304c781047120b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09/6301f1862f7555350f75cf7f2bc12da085e249\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09/6301f1862f7555350f75cf7f2bc12da085e249: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09/b1b2e7e88a555d6e90321894f695da219ae39b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/09/b1b2e7e88a555d6e90321894f695da219ae39b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/23\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/23: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/23/aaeb6d2bccffd3cac7053fb2bae9993b673917\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/23/aaeb6d2bccffd3cac7053fb2bae9993b673917: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e7: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e7/e3c451531a69b8f93b0b5f20f331be08f6cad9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e7/e3c451531a69b8f93b0b5f20f331be08f6cad9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47/32472a0dbd0dd3c6943f6937aee28a4709ec19\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47/32472a0dbd0dd3c6943f6937aee28a4709ec19: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47/feeec6b3e5dcd790f9f4342823d009f215ad9b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/47/feeec6b3e5dcd790f9f4342823d009f215ad9b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d8/e2dc267b9f981b04ee4f32ffcf153b80372a97\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d8/e2dc267b9f981b04ee4f32ffcf153b80372a97: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe/8101e106f28f4d79f1f361a6c6990658949ad6\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe/8101e106f28f4d79f1f361a6c6990658949ad6: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe/ff62c721e16e69c94d2e5f29694bb8f57b4a05\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/fe/ff62c721e16e69c94d2e5f29694bb8f57b4a05: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/69\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/69: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/69/e91fc180ba574f8c8da8dc0841c02685f9e158\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/69/e91fc180ba574f8c8da8dc0841c02685f9e158: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b1/977f3727b618ea409dea372288d59eba56e258\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b1/977f3727b618ea409dea372288d59eba56e258: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/56\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/56: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/56/2dcf2b3817b09bd6cb6911f37ade7a2ec90132\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/56/2dcf2b3817b09bd6cb6911f37ade7a2ec90132: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/95\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/95: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/95/6559bbeba586774f90a69a1aca070915985c82\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/95/6559bbeba586774f90a69a1aca070915985c82: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9/1502b7e9364d18f330c18b67f19f11bb587251\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9/1502b7e9364d18f330c18b67f19f11bb587251: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9/c07f8c51195798688d2932fd5113d33b1258e8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/c9/c07f8c51195798688d2932fd5113d33b1258e8: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0/f907b7f5942ac20599454a4c98017bd14f274f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0/f907b7f5942ac20599454a4c98017bd14f274f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0/d0ead630b525942d6a5f341fe7eb192c40d038\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/d0/d0ead630b525942d6a5f341fe7eb192c40d038: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/35\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/35: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/35/addfd140638b8f14d0ff1cfb6a50818526db4a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/35/addfd140638b8f14d0ff1cfb6a50818526db4a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4e/aec58e151fb249ecad2e4b6ea0ad1279af660c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/4e/aec58e151fb249ecad2e4b6ea0ad1279af660c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bc/a2824b9a542c8c1a2d24166dacd6c41def099f\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/bc/a2824b9a542c8c1a2d24166dacd6c41def099f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1c/2f9da1a336a9dbd11c70155b30eeeb9cf1b724\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1c/2f9da1a336a9dbd11c70155b30eeeb9cf1b724: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1f/75d5ec22384f7646e9fdce8dd744e48febf8fc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/1f/75d5ec22384f7646e9fdce8dd744e48febf8fc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a1/cea1c489c70202432c596418d52da652a61f4e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a1/cea1c489c70202432c596418d52da652a61f4e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a4/e91a47cc70b817f69cf680cdd9d26ade32efab\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a4/e91a47cc70b817f69cf680cdd9d26ade32efab: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b4\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b4/d1180eaba463aaf6ec16f4b191543da8d10ca2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/b4/d1180eaba463aaf6ec16f4b191543da8d10ca2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8/8aef7816752c4fdced4eccae3c831d0e9378b3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8/8aef7816752c4fdced4eccae3c831d0e9378b3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8/a8599321574c8b2218cf2667e63a079ce64931\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e8/a8599321574c8b2218cf2667e63a079ce64931: no such file or directory" Nov 28 04:37:54 
localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02/0461369d40182bd8b73f5ff0ddf320458847be\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02/0461369d40182bd8b73f5ff0ddf320458847be: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02/c63bf4f4130719bd9eac8aa92124b1b9121e8f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/02/c63bf4f4130719bd9eac8aa92124b1b9121e8f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/642c280dda6fc23b017bcf4d96b02d4aa5fa92\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/642c280dda6fc23b017bcf4d96b02d4aa5fa92: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/c49b0042bbea3205d9fb813fbc0c6fad94018f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/c49b0042bbea3205d9fb813fbc0c6fad94018f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/38367336c106aa5b6af44fa828d16d78360169\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/36/38367336c106aa5b6af44fa828d16d78360169: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a7\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a7: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a7/d2df67bcd9444ce952d07516131850e28835e6\": 
lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a7/d2df67bcd9444ce952d07516131850e28835e6: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/42\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/42: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/42/ec0566d7e0217b38d69a7cae503494b85b5808\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/42/ec0566d7e0217b38d69a7cae503494b85b5808: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a/3c8c8992af0c26b2902bc0d70d1915c91bdcb3\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a/3c8c8992af0c26b2902bc0d70d1915c91bdcb3: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a/6c0f71bb77bd4f4505b7e636d26fdf4f536043\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/0a/6c0f71bb77bd4f4505b7e636d26fdf4f536043: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/82\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/82: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/82/7c6e053590fe437d616e6ba57b2a340302f016\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/82/7c6e053590fe437d616e6ba57b2a340302f016: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f2/2bc7fbcd4127bb47a721083d01adf2b21c4a72\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/f2/2bc7fbcd4127bb47a721083d01adf2b21c4a72: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ff\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ff: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ff/b19d10eed0b0c2a2fda29318808478bb6757f4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/ff/b19d10eed0b0c2a2fda29318808478bb6757f4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/22\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/22: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/22/040dced525bac7bde5dd2da9e9161d69b1fd76\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/22/040dced525bac7bde5dd2da9e9161d69b1fd76: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/40\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/40: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/40/27903672cb134dc5d2a1279d12ca116dfc9c83\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/40/27903672cb134dc5d2a1279d12ca116dfc9c83: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9e\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9e: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9e/72ca1dc45d7726c81d250781036c4864f02d51\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/9e/72ca1dc45d7726c81d250781036c4864f02d51: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/dc\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/dc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/dc/fec00d6e13bce7bfb28a1833b98791754cff0a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/dc/fec00d6e13bce7bfb28a1833b98791754cff0a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/51\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/51: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/51/cb826df3ffe1760a506cbbeb5c90c941c5bba8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/51/cb826df3ffe1760a506cbbeb5c90c941c5bba8: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not 
stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5d/6147fb4d3529e231d2fc7bec8dca9f6a95246a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/5d/6147fb4d3529e231d2fc7bec8dca9f6a95246a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/cc\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/cc: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/cc/6c613cb2a391c3402bbc2b4ee7b9e386b3745f\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/cc/6c613cb2a391c3402bbc2b4ee7b9e386b3745f: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/30\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/30: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/30/27da82f167b2e96c8c188a6f775b57dd9ae9ee\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/30/27da82f167b2e96c8c188a6f775b57dd9ae9ee: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21/1690f1a4a95e671b438eba9e6dba8aaca55536\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21/1690f1a4a95e671b438eba9e6dba8aaca55536: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21/3fdb2cbb13aab0c1e236e869657a89575733b1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/21/3fdb2cbb13aab0c1e236e869657a89575733b1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27/307a54d7ef18d9286d50dac10aa6fa64653d4b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27/307a54d7ef18d9286d50dac10aa6fa64653d4b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27/4ae8fd908163b4a74be9b8aed1e4c25860b5eb\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/27/4ae8fd908163b4a74be9b8aed1e4c25860b5eb: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/70\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/70: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/70/33eec8708bc486817e6ae72f89a0e9e2ec7b9c\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/70/33eec8708bc486817e6ae72f89a0e9e2ec7b9c: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a/317a879ffe04800b781007d71f3037737f275b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a/317a879ffe04800b781007d71f3037737f275b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a/5ec56f7af55bac66dbb24832633ae3e8b1443b\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/2a/5ec56f7af55bac66dbb24832633ae3e8b1443b: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91/69e16e2da3ec26201bb11ef965947c2f0b7712\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91/69e16e2da3ec26201bb11ef965947c2f0b7712: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91/849d07bc5e74f22b550856dcd3adb1147b3349\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/91/849d07bc5e74f22b550856dcd3adb1147b3349: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/01\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/01: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/01/14d70635a6ba8e88d974d07815c518ecb28da9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/01/14d70635a6ba8e88d974d07815c518ecb28da9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a5\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a5: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a5/b5f7e802d47f332aaaaf50636f42aac9ed0e74\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/a5/b5f7e802d47f332aaaaf50636f42aac9ed0e74: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/db\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/db: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/db/1db4ad8871c87337ab8ca52a9a7aab6c27c9ec\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/db/1db4ad8871c87337ab8ca52a9a7aab6c27c9ec: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e2\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e2: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e2/6ff8bc813fa3da4ad75ced252e1b8d43d3a581\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e2/6ff8bc813fa3da4ad75ced252e1b8d43d3a581: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50/aa2d047e7740e42486ea1f6f08d812b5ab59c5\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50/aa2d047e7740e42486ea1f6f08d812b5ab59c5: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50/ee4e16ae62e0615cfdab55b9a5ba0ecc02bea9\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/50/ee4e16ae62e0615cfdab55b9a5ba0ecc02bea9: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/df\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/df: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: 
time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/df/f2189d4c2dbd71b897709832a57ebbd763f7a4\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/df/f2189d4c2dbd71b897709832a57ebbd763f7a4: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13/de9022c0d7eb81bd7e57ddf877f4d773d93155\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13/de9022c0d7eb81bd7e57ddf877f4d773d93155: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13/932b9afac88ef1a87e38f5b0988257aea390f1\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/13/932b9afac88ef1a87e38f5b0988257aea390f1: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/29\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/29: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/29/ffdb42c588b438cceb6647f8000016711e07ec\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/29/ffdb42c588b438cceb6647f8000016711e07ec: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e6\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e6: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e6/fc03677d8f4ef7a68cdfe2273b801c1df19c6d\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/.build-id/e6/fc03677d8f4ef7a68cdfe2273b801c1df19c6d: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_ADDRESS: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_COLLATE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MEASUREMENT: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MESSAGES: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_TELEPHONE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_TIME: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_CTYPE: 
no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_IDENTIFICATION: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_MONETARY: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_NAME: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_NUMERIC: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_PAPER\": lstat 
/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_IE/LC_PAPER: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_ADDRESS: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_COLLATE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MONETARY: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_PAPER: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_TELEPHONE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_TIME: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_CTYPE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_IDENTIFICATION: no such file or directory" 
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_ZA/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_BW.utf8/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_HK.utf8/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_SG.utf8/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AG/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_AU/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_COLLATE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MEASUREMENT: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MESSAGES/SYS_LC_MESSAGES\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MESSAGES/SYS_LC_MESSAGES: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MONETARY\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_MONETARY: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_NAME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_TELEPHONE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_ADDRESS: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_CTYPE: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_IDENTIFICATION: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_NUMERIC: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_TIME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_US/LC_TIME: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_PAPER\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_PAPER: no such file or directory"
Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_TELEPHONE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_TELEPHONE:
no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_COLLATE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_COLLATE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_CTYPE\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_CTYPE: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_IDENTIFICATION\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_IDENTIFICATION: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_MEASUREMENT\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_MEASUREMENT: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat 
\"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_NAME\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_NAME: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_NUMERIC\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_NUMERIC: no such file or directory" Nov 28 04:37:54 localhost podman[239012]: time="2025-11-28T09:37:54Z" level=error msg="Can not stat \"/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_ADDRESS\": lstat /var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/merged/usr/lib/locale/en_GB.iso885915/LC_ADDRESS: no such file or directory" Nov 28 04:40:40 localhost python3.9[253867]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:40:40 localhost rsyslogd[758]: imjournal: 1699 messages lost due to rate-limiting (20000 allowed within 600 seconds) Nov 28 04:40:40 localhost python3.9[253922]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True 
get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:40:41 localhost python3.9[254031]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322840.9876287-1000-174850182987501/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:40:42 localhost python3.9[254086]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:40:42 localhost systemd[1]: Reloading. Nov 28 04:40:42 localhost systemd-rc-local-generator[254112]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:40:42 localhost systemd-sysv-generator[254117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:42 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost python3.9[254178]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:40:43 localhost systemd[1]: Reloading. Nov 28 04:40:43 localhost systemd-sysv-generator[254205]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:40:43 localhost systemd-rc-local-generator[254202]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:40:43 localhost systemd[1]: Starting neutron_sriov_agent container... Nov 28 04:40:43 localhost systemd[1]: Started libcrun container. Nov 28 04:40:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a087f5889c3563491fc0fd2134c28d5a04378c317280587469fbfdbb4f54b43/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 28 04:40:43 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a087f5889c3563491fc0fd2134c28d5a04378c317280587469fbfdbb4f54b43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:40:43 localhost podman[254219]: 2025-11-28 09:40:43.692419965 +0000 UTC m=+0.105442903 container init 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_sriov_agent) Nov 28 04:40:43 localhost podman[254219]: 2025-11-28 09:40:43.700739373 +0000 UTC m=+0.113762311 container start 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 28 04:40:43 localhost podman[254219]: neutron_sriov_agent Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + sudo -E kolla_set_configs Nov 28 04:40:43 localhost systemd[1]: Started neutron_sriov_agent container. Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Validating config file Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Copying service configuration files Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Writing out command to execute Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for 
/var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: ++ cat /run_command Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + CMD=/usr/bin/neutron-sriov-nic-agent Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + ARGS= Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + sudo kolla_copy_cacerts Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + [[ ! -n '' ]] Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + . kolla_extend_start Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: Running command: '/usr/bin/neutron-sriov-nic-agent' Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + umask 0022 Nov 28 04:40:43 localhost neutron_sriov_agent[254234]: + exec /usr/bin/neutron-sriov-nic-agent Nov 28 04:40:44 localhost python3.9[254358]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:40:44 localhost systemd[1]: Stopping neutron_sriov_agent container... 
Nov 28 04:40:44 localhost systemd[1]: tmp-crun.AsCkFe.mount: Deactivated successfully. Nov 28 04:40:44 localhost systemd[1]: libpod-679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c.scope: Deactivated successfully. Nov 28 04:40:44 localhost systemd[1]: libpod-679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c.scope: Consumed 1.217s CPU time. Nov 28 04:40:44 localhost podman[254362]: 2025-11-28 09:40:44.93014404 +0000 UTC m=+0.098597081 container died 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent) Nov 28 04:40:45 localhost podman[254362]: 2025-11-28 09:40:45.023543148 +0000 UTC m=+0.191996119 container cleanup 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:40:45 localhost podman[254362]: neutron_sriov_agent Nov 28 04:40:45 localhost podman[254376]: 2025-11-28 09:40:45.026549561 +0000 UTC m=+0.095977919 container cleanup 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 
'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=neutron_sriov_agent, org.label-schema.vendor=CentOS) Nov 28 04:40:45 localhost podman[254387]: 2025-11-28 09:40:45.107316237 +0000 UTC m=+0.055817443 container cleanup 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=neutron_sriov_agent, 
container_name=neutron_sriov_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 04:40:45 localhost podman[254387]: neutron_sriov_agent Nov 28 04:40:45 localhost systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully. Nov 28 04:40:45 localhost systemd[1]: Stopped neutron_sriov_agent container. Nov 28 04:40:45 localhost systemd[1]: Starting neutron_sriov_agent container... Nov 28 04:40:45 localhost systemd[1]: Started libcrun container. Nov 28 04:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a087f5889c3563491fc0fd2134c28d5a04378c317280587469fbfdbb4f54b43/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 28 04:40:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a087f5889c3563491fc0fd2134c28d5a04378c317280587469fbfdbb4f54b43/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:40:45 localhost podman[254400]: 2025-11-28 09:40:45.253633697 +0000 UTC m=+0.117499907 container init 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', 
'/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, config_id=neutron_sriov_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=neutron_sriov_agent) Nov 28 04:40:45 localhost podman[254400]: 2025-11-28 09:40:45.261559673 +0000 UTC m=+0.125425883 container start 679926e9f9d19f972216ef7c650e8482dfdd25fbca9daa0ab447b1127efd4b9c (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'b3fa7b09cb2ae0d1c8c04f9c87a2894b2bcc39a986fca3511978ef5621fe5639'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, 
config_id=neutron_sriov_agent, container_name=neutron_sriov_agent) Nov 28 04:40:45 localhost podman[254400]: neutron_sriov_agent Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + sudo -E kolla_set_configs Nov 28 04:40:45 localhost systemd[1]: Started neutron_sriov_agent container. Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Validating config file Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Copying service configuration files Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Writing out command to execute Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: 
INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/a99cc267a5c3ade03c88b3bb0a43299c9bb62825df6f4ca0c30c03cccfac55c1 Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: ++ cat /run_command Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + CMD=/usr/bin/neutron-sriov-nic-agent Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + ARGS= Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + sudo kolla_copy_cacerts Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + [[ ! -n '' ]] Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + . kolla_extend_start Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: Running command: '/usr/bin/neutron-sriov-nic-agent' Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\''' Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + umask 0022 Nov 28 04:40:45 localhost neutron_sriov_agent[254415]: + exec /usr/bin/neutron-sriov-nic-agent Nov 28 04:40:45 localhost systemd[1]: session-57.scope: Deactivated successfully. Nov 28 04:40:45 localhost systemd[1]: session-57.scope: Consumed 22.770s CPU time. Nov 28 04:40:45 localhost systemd-logind[763]: Session 57 logged out. Waiting for processes to exit. 
Nov 28 04:40:45 localhost systemd-logind[763]: Removed session 57. Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.873 2 INFO neutron.common.config [-] Logging enabled!#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.873 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005538515.localdomain'}#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.874 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] RPC agent_id: nic-switch-agent.np0005538515.localdomain#033[00m Nov 28 04:40:46 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:46.879 2 INFO neutron.agent.agent_extensions_manager [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] Loaded agent extensions: ['qos']#033[00m Nov 28 04:40:46 localhost 
neutron_sriov_agent[254415]: 2025-11-28 09:40:46.879 2 INFO neutron.agent.agent_extensions_manager [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] Initializing agent extension 'qos'#033[00m Nov 28 04:40:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52368 DF PROTO=TCP SPT=54008 DPT=9102 SEQ=2498611152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD99CFB0000000001030307) Nov 28 04:40:47 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:47.277 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] Agent initialized successfully, now running... #033[00m Nov 28 04:40:47 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:47.277 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] SRIOV NIC Agent RPC Daemon Started!#033[00m Nov 28 04:40:47 localhost neutron_sriov_agent[254415]: 2025-11-28 09:40:47.278 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-f57fe1c5-075a-474f-89f1-63a8be551758 - - - - - -] Agent out of sync with plugin!#033[00m Nov 28 04:40:48 localhost nova_compute[228497]: 2025-11-28 09:40:48.172 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:40:48 localhost podman[254448]: 2025-11-28 09:40:48.980325331 +0000 UTC m=+0.091950602 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal) Nov 28 04:40:48 localhost podman[254448]: 2025-11-28 09:40:48.996479333 +0000 UTC m=+0.108104594 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal) Nov 28 04:40:49 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:40:49 localhost nova_compute[228497]: 2025-11-28 09:40:49.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:40:50.821 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:40:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:40:50.821 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:40:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:40:50.821 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" 
"released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.070 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.094 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.095 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.095 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:40:52 localhost 
nova_compute[228497]: 2025-11-28 09:40:52.095 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.096 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:40:52 localhost sshd[254488]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:40:52 localhost systemd-logind[763]: New session 58 of user zuul. Nov 28 04:40:52 localhost systemd[1]: Started Session 58 of User zuul. Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.524 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.751 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.754 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12962MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.754 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.755 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.833 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.834 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:40:52 localhost nova_compute[228497]: 2025-11-28 09:40:52.863 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:40:53 localhost nova_compute[228497]: 2025-11-28 09:40:53.336 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:40:53 localhost nova_compute[228497]: 2025-11-28 09:40:53.342 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:40:53 localhost python3.9[254621]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:40:53 localhost nova_compute[228497]: 2025-11-28 09:40:53.359 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:40:53 localhost nova_compute[228497]: 2025-11-28 09:40:53.361 228501 DEBUG 
nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:40:53 localhost nova_compute[228497]: 2025-11-28 09:40:53.361 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.362 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.363 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.363 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.382 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.382 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.383 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.383 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.384 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:54 localhost nova_compute[228497]: 2025-11-28 09:40:54.384 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:40:54 localhost python3.9[254737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:40:55 localhost nova_compute[228497]: 2025-11-28 09:40:55.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:40:55 localhost python3.9[254800]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:40:57 localhost openstack_network_exporter[240973]: ERROR 09:40:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:40:57 localhost openstack_network_exporter[240973]: ERROR 09:40:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:40:57 localhost openstack_network_exporter[240973]: ERROR 09:40:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:40:57 localhost openstack_network_exporter[240973]: ERROR 09:40:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:40:57 localhost openstack_network_exporter[240973]: Nov 28 04:40:57 localhost 
openstack_network_exporter[240973]: ERROR 09:40:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:40:57 localhost openstack_network_exporter[240973]: Nov 28 04:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:40:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:40:57 localhost podman[254803]: 2025-11-28 09:40:57.98676701 +0000 UTC m=+0.085914226 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:40:57 localhost podman[254803]: 2025-11-28 09:40:57.995364036 +0000 UTC m=+0.094511302 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 04:40:58 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:40:58 localhost podman[254806]: 2025-11-28 09:40:58.089324102 +0000 UTC m=+0.184961750 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:40:58 localhost podman[254806]: 2025-11-28 09:40:58.097363512 +0000 UTC m=+0.193001120 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:40:58 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:40:58 localhost podman[254804]: 2025-11-28 09:40:58.20009915 +0000 UTC m=+0.300354151 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:40:58 localhost podman[254805]: 2025-11-28 09:40:58.227538541 +0000 UTC m=+0.328963718 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent) Nov 28 04:40:58 localhost podman[254804]: 2025-11-28 09:40:58.238724118 +0000 UTC m=+0.338979149 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': 
'/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3) Nov 28 04:40:58 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:40:58 localhost podman[254805]: 2025-11-28 09:40:58.258881854 +0000 UTC m=+0.360307011 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:40:58 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:40:58 localhost podman[239012]: time="2025-11-28T09:40:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:40:58 localhost podman[239012]: @ - - [28/Nov/2025:09:40:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144036 "" "Go-http-client/1.1" Nov 28 04:40:58 localhost podman[239012]: @ - - [28/Nov/2025:09:40:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16299 "" "Go-http-client/1.1" Nov 28 04:40:58 localhost systemd[1]: tmp-crun.p2VTl6.mount: Deactivated successfully. Nov 28 04:41:00 localhost python3.9[254995]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None Nov 28 04:41:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:41:00 localhost systemd[1]: tmp-crun.EGcRhz.mount: Deactivated successfully. 
Nov 28 04:41:00 localhost podman[255109]: 2025-11-28 09:41:00.820494018 +0000 UTC m=+0.055544105 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:41:00 localhost podman[255109]: 2025-11-28 09:41:00.854489762 +0000 UTC m=+0.089539909 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:41:00 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:41:00 localhost python3.9[255108]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:01 localhost python3.9[255241]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61060 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9D6F90000000001030307) Nov 28 04:41:02 localhost python3.9[255351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:02 localhost python3.9[255461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True 
modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61061 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9DAFB0000000001030307) Nov 28 04:41:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52369 DF PROTO=TCP SPT=54008 DPT=9102 SEQ=2498611152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9DCFA0000000001030307) Nov 28 04:41:04 localhost python3.9[255571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:05 localhost python3.9[255681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 
DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61062 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9E2FA0000000001030307) Nov 28 04:41:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52055 DF PROTO=TCP SPT=39936 DPT=9102 SEQ=2080535726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9E6FA0000000001030307) Nov 28 04:41:06 localhost python3.9[255791]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:07 localhost python3.9[255901]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:41:07 localhost podman[255989]: 2025-11-28 09:41:07.81822863 +0000 UTC m=+0.086514826 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 04:41:07 localhost podman[255989]: 2025-11-28 09:41:07.833522464 +0000 UTC m=+0.101808710 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:41:07 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:41:07 localhost python3.9[255990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322866.5391157-280-238172815887883/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:08 localhost python3.9[256117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61063 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AD9F2BA0000000001030307) Nov 28 04:41:09 localhost python3.9[256203]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322868.1553445-325-127343057006529/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:09 localhost python3.9[256311]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:10 localhost python3.9[256397]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322869.3451304-325-89253515392770/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:10 localhost python3.9[256505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:11 localhost python3.9[256591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322870.4732502-325-124257756122899/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=5873adc378353c4eccfdbaaec218413c8cf1c0ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:12 localhost python3.9[256699]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 
get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:13 localhost python3.9[256785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322872.367248-500-155972275857620/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=ecea6b6701c9be6f1d83be82edd3c16fe40b7bb4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:14 localhost python3.9[256893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:14 localhost python3.9[256979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764322873.6216002-544-3936382088340/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:15 localhost python3.9[257123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:15 localhost python3.9[257229]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764322874.7189987-544-69477358306292/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:16 localhost python3.9[257349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61064 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA12FB0000000001030307) Nov 28 04:41:17 localhost python3.9[257404]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:18 localhost python3.9[257512]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:18 localhost python3.9[257616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764322877.7349155-632-75949131049161/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:41:20 localhost podman[257719]: 2025-11-28 09:41:20.066371344 +0000 UTC m=+0.163866335 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9) Nov 28 04:41:20 localhost podman[257719]: 2025-11-28 09:41:20.083755093 +0000 UTC m=+0.181250074 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:41:20 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:41:20 localhost python3.9[257725]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:41:20 localhost python3.9[257856]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:21 localhost python3.9[257966]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:22 localhost python3.9[258023]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:22 localhost python3.9[258133]: ansible-ansible.legacy.stat 
Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:23 localhost python3.9[258190]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:23 localhost python3.9[258300]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:24 localhost python3.9[258410]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:25 localhost python3.9[258467]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None 
setype=None attributes=None Nov 28 04:41:25 localhost python3.9[258577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:26 localhost python3.9[258634]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:27 localhost python3.9[258744]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:41:27 localhost systemd[1]: Reloading. Nov 28 04:41:27 localhost systemd-rc-local-generator[258766]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:41:27 localhost systemd-sysv-generator[258775]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:27 localhost openstack_network_exporter[240973]: ERROR 09:41:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:41:27 localhost openstack_network_exporter[240973]: ERROR 09:41:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:41:27 localhost openstack_network_exporter[240973]: ERROR 09:41:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:41:27 localhost openstack_network_exporter[240973]: ERROR 09:41:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing 
datapath Nov 28 04:41:27 localhost openstack_network_exporter[240973]: Nov 28 04:41:27 localhost openstack_network_exporter[240973]: ERROR 09:41:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:41:27 localhost openstack_network_exporter[240973]: Nov 28 04:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:41:28 localhost podman[258892]: 2025-11-28 09:41:28.132572197 +0000 UTC m=+0.083415609 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Nov 28 04:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:41:28 localhost podman[258892]: 2025-11-28 09:41:28.147402358 +0000 UTC m=+0.098245780 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:41:28 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:41:28 localhost podman[258911]: 2025-11-28 09:41:28.2303388 +0000 UTC m=+0.080420905 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:41:28 localhost python3.9[258893]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:28 localhost podman[258911]: 2025-11-28 09:41:28.244464269 +0000 UTC 
m=+0.094546364 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:41:28 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:41:28 localhost podman[258935]: 2025-11-28 09:41:28.356093303 +0000 UTC m=+0.068587609 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:41:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:41:28 localhost podman[258935]: 2025-11-28 09:41:28.391337157 +0000 UTC m=+0.103831513 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller) Nov 28 04:41:28 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:41:28 localhost systemd[1]: tmp-crun.Xc32tK.mount: Deactivated successfully. 
Nov 28 04:41:28 localhost podman[258972]: 2025-11-28 09:41:28.452276928 +0000 UTC m=+0.075247706 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 28 04:41:28 localhost podman[258972]: 2025-11-28 09:41:28.462441672 +0000 UTC 
m=+0.085412451 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:41:28 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:41:28 localhost python3.9[259031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:28 localhost podman[239012]: time="2025-11-28T09:41:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:41:28 localhost podman[239012]: @ - - [28/Nov/2025:09:41:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144036 "" "Go-http-client/1.1" Nov 28 04:41:28 localhost podman[239012]: @ - - [28/Nov/2025:09:41:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16300 "" "Go-http-client/1.1" Nov 28 04:41:29 localhost python3.9[259141]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:29 localhost python3.9[259198]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:30 localhost python3.9[259308]: 
ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:41:30 localhost systemd[1]: Reloading. Nov 28 04:41:30 localhost systemd-sysv-generator[259334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:41:30 localhost systemd-rc-local-generator[259330]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:30 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:41:31 localhost systemd[1]: Starting Create netns directory... Nov 28 04:41:31 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. Nov 28 04:41:31 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 04:41:31 localhost systemd[1]: Finished Create netns directory. 
Nov 28 04:41:31 localhost podman[259346]: 2025-11-28 09:41:31.180734448 +0000 UTC m=+0.088224279 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:41:31 localhost podman[259346]: 2025-11-28 09:41:31.187759085 +0000 UTC m=+0.095248946 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:41:31 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:41:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31079 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA4C290000000001030307) Nov 28 04:41:32 localhost python3.9[259485]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:41:32 localhost python3.9[259595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:41:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31080 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA503B0000000001030307) Nov 28 04:41:33 localhost python3.9[259683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764322892.4640145-1075-157189402429008/.source.json _original_basename=.j4cfuoqs follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:33 
localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61065 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA52FA0000000001030307) Nov 28 04:41:34 localhost python3.9[259793]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31081 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA583A0000000001030307) Nov 28 04:41:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52370 DF PROTO=TCP SPT=54008 DPT=9102 SEQ=2498611152 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA5AFA0000000001030307) Nov 28 04:41:36 localhost python3.9[260101]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False Nov 28 04:41:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:41:37 localhost python3.9[260211]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:41:37 localhost podman[260212]: 2025-11-28 09:41:37.983525679 +0000 UTC m=+0.069500338 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 
04:41:37 localhost podman[260212]: 2025-11-28 09:41:37.992639752 +0000 UTC m=+0.078614401 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 04:41:38 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:41:38 localhost python3.9[260340]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 28 04:41:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31082 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA67FA0000000001030307) Nov 28 04:41:44 localhost python3[260477]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:41:44 localhost podman[260514]: Nov 28 04:41:44 localhost podman[260514]: 2025-11-28 09:41:44.698707062 +0000 UTC m=+0.076077381 container create 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=neutron_dhcp, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:41:44 localhost podman[260514]: 2025-11-28 09:41:44.657749101 +0000 UTC m=+0.035119440 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 04:41:44 localhost python3[260477]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 04:41:45 localhost python3.9[260661]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:41:46 localhost python3.9[260773]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:46 localhost python3.9[260828]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 
04:41:47 localhost python3.9[260937]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764322906.9300923-1339-173838920908179/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:41:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31083 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADA88FA0000000001030307) Nov 28 04:41:48 localhost python3.9[260992]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:41:48 localhost systemd[1]: Reloading. Nov 28 04:41:48 localhost systemd-rc-local-generator[261015]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:41:48 localhost systemd-sysv-generator[261020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:48 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost python3.9[261083]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:41:49 localhost systemd[1]: Reloading. Nov 28 04:41:49 localhost systemd-rc-local-generator[261109]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:41:49 localhost systemd-sysv-generator[261113]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. 
Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:41:49 localhost systemd[1]: Starting neutron_dhcp_agent container... Nov 28 04:41:49 localhost systemd[1]: Started libcrun container. 
Nov 28 04:41:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b02dc597e68fe7ad726f5e333d8ab9d38ed42f81284c1cac4d8e7be783762c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 28 04:41:49 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b02dc597e68fe7ad726f5e333d8ab9d38ed42f81284c1cac4d8e7be783762c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:41:49 localhost podman[261124]: 2025-11-28 09:41:49.55705225 +0000 UTC m=+0.110927903 container init 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}) Nov 28 04:41:49 localhost podman[261124]: 2025-11-28 09:41:49.566298067 +0000 UTC m=+0.120173710 container start 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_dhcp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_managed=true) Nov 28 04:41:49 localhost podman[261124]: neutron_dhcp_agent Nov 28 04:41:49 localhost 
neutron_dhcp_agent[261136]: + sudo -E kolla_set_configs Nov 28 04:41:49 localhost systemd[1]: Started neutron_dhcp_agent container. Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Validating config file Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Copying service configuration files Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Writing out command to execute Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for 
/var/lib/neutron/metadata_proxy Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/a99cc267a5c3ade03c88b3bb0a43299c9bb62825df6f4ca0c30c03cccfac55c1 Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: ++ cat /run_command Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + CMD=/usr/bin/neutron-dhcp-agent Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + ARGS= Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + sudo kolla_copy_cacerts Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + [[ ! -n '' ]] Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + . 
kolla_extend_start Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: Running command: '/usr/bin/neutron-dhcp-agent' Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + umask 0022 Nov 28 04:41:49 localhost neutron_dhcp_agent[261136]: + exec /usr/bin/neutron-dhcp-agent Nov 28 04:41:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:41:50 localhost podman[261261]: 2025-11-28 09:41:50.373459472 +0000 UTC m=+0.087983831 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Nov 28 04:41:50 localhost podman[261261]: 2025-11-28 09:41:50.386405333 +0000 UTC m=+0.100929642 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350) Nov 28 04:41:50 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:41:50 localhost python3.9[261260]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:41:50 localhost systemd[1]: Stopping neutron_dhcp_agent container... 
Nov 28 04:41:50 localhost systemd[1]: tmp-crun.YM59mU.mount: Deactivated successfully. Nov 28 04:41:50 localhost systemd[1]: libpod-0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294.scope: Deactivated successfully. Nov 28 04:41:50 localhost systemd[1]: libpod-0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294.scope: Consumed 1.114s CPU time. Nov 28 04:41:50 localhost podman[261284]: 2025-11-28 09:41:50.692935805 +0000 UTC m=+0.079692274 container died 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:41:50 localhost podman[261284]: 2025-11-28 09:41:50.788590433 +0000 UTC m=+0.175346842 container cleanup 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 28 04:41:50 localhost podman[261284]: neutron_dhcp_agent Nov 28 04:41:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:50.821 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:41:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:50.821 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:41:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:50.822 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:41:50 localhost podman[261325]: error opening file `/run/crun/0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294/status`: No such file or directory Nov 28 04:41:50 localhost podman[261313]: 2025-11-28 09:41:50.888253375 +0000 UTC m=+0.072696937 container cleanup 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 04:41:50 localhost podman[261313]: neutron_dhcp_agent Nov 28 04:41:50 localhost systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully. Nov 28 04:41:50 localhost systemd[1]: Stopped neutron_dhcp_agent container. Nov 28 04:41:50 localhost systemd[1]: Starting neutron_dhcp_agent container... Nov 28 04:41:51 localhost systemd[1]: Started libcrun container. 
Nov 28 04:41:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b02dc597e68fe7ad726f5e333d8ab9d38ed42f81284c1cac4d8e7be783762c/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff) Nov 28 04:41:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b02dc597e68fe7ad726f5e333d8ab9d38ed42f81284c1cac4d8e7be783762c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:41:51 localhost podman[261327]: 2025-11-28 09:41:51.037736984 +0000 UTC m=+0.113071760 container init 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 04:41:51 localhost podman[261327]: 2025-11-28 09:41:51.044014778 +0000 UTC m=+0.119349554 container start 0e5a51137dc54e3c7d5de7eadd56c57c951f38ee121a300835ca4e5324dc3294 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '97b631b11487264e1d06ace7cd32b528ca21fc2a7a7b166e6cb3ae7d17fd8dd3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:41:51 localhost podman[261327]: neutron_dhcp_agent Nov 28 04:41:51 localhost 
neutron_dhcp_agent[261342]: + sudo -E kolla_set_configs Nov 28 04:41:51 localhost systemd[1]: Started neutron_dhcp_agent container. Nov 28 04:41:51 localhost nova_compute[228497]: 2025-11-28 09:41:51.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Validating config file Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Copying service configuration files Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Writing out command to execute Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/.cache Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/external Nov 28 04:41:51 localhost 
neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934 Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/a99cc267a5c3ade03c88b3bb0a43299c9bb62825df6f4ca0c30c03cccfac55c1 Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: ++ cat /run_command Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + CMD=/usr/bin/neutron-dhcp-agent Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + ARGS= Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + sudo kolla_copy_cacerts Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + [[ ! -n '' ]] Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + . 
kolla_extend_start Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\''' Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: Running command: '/usr/bin/neutron-dhcp-agent' Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + umask 0022 Nov 28 04:41:51 localhost neutron_dhcp_agent[261342]: + exec /usr/bin/neutron-dhcp-agent Nov 28 04:41:51 localhost systemd[1]: session-58.scope: Deactivated successfully. Nov 28 04:41:51 localhost systemd[1]: session-58.scope: Consumed 34.033s CPU time. Nov 28 04:41:51 localhost systemd-logind[763]: Session 58 logged out. Waiting for processes to exit. Nov 28 04:41:51 localhost systemd-logind[763]: Removed session 58. Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.070 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.072 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.089 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.090 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.090 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.090 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.090 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:41:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:52.275 261346 INFO neutron.common.config [-] Logging enabled!#033[00m Nov 28 04:41:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:52.275 261346 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.593 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:41:52 localhost 
neutron_dhcp_agent[261342]: 2025-11-28 09:41:52.700 261346 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.786 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.788 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12923MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", 
"product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.788 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.789 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.870 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.871 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] 
Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:41:52 localhost nova_compute[228497]: 2025-11-28 09:41:52.893 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:41:53 localhost nova_compute[228497]: 2025-11-28 09:41:53.349 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:41:53 localhost nova_compute[228497]: 2025-11-28 09:41:53.355 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:41:53 localhost nova_compute[228497]: 2025-11-28 09:41:53.375 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:41:53 localhost nova_compute[228497]: 2025-11-28 09:41:53.376 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:41:53 localhost nova_compute[228497]: 2025-11-28 09:41:53.377 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:41:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:53.716 261346 INFO neutron.agent.dhcp.agent [None req-89cff752-fd5b-4681-91c5-e1b153e3385a - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 28 04:41:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:53.717 261346 INFO neutron.agent.dhcp.agent [-] Starting network 887157f9-a765-40c0-8be5-1fba3ddea8f8 dhcp configuration#033[00m Nov 28 04:41:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:53.772 261346 INFO neutron.agent.dhcp.agent [-] Starting network 40d5da59-6201-424a-8380-80ecc3d67c7e dhcp configuration#033[00m Nov 28 04:41:54 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:54.302 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 
'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:41:54 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:54.303 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 04:41:54 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:54.307 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.378 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.378 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.379 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.392 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info 
cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.393 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.393 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.394 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:54 localhost nova_compute[228497]: 2025-11-28 09:41:54.394 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:41:54 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:54.784 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp7j8wsnzi/privsep.sock']#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.399 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.288 261423 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.293 261423 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.296 261423 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.296 261423 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261423#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.403 261346 WARNING oslo_privsep.priv_context [None req-342297e5-b70c-4019-b8b9-e2d6661e0978 - - - - - -] privsep daemon already running#033[00m Nov 28 04:41:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:55.910 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Running 
privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpkitqw0jv/privsep.sock']#033[00m Nov 28 04:41:56 localhost nova_compute[228497]: 2025-11-28 09:41:56.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:56 localhost nova_compute[228497]: 2025-11-28 09:41:56.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.519 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.411 261433 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.416 261433 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.419 261433 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.419 261433 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261433#033[00m Nov 28 04:41:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:56.525 261346 WARNING oslo_privsep.priv_context [None 
req-342297e5-b70c-4019-b8b9-e2d6661e0978 - - - - - -] privsep daemon already running#033[00m Nov 28 04:41:57 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:57.414 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpx53cxqh7/privsep.sock']#033[00m Nov 28 04:41:57 localhost openstack_network_exporter[240973]: ERROR 09:41:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:41:57 localhost openstack_network_exporter[240973]: ERROR 09:41:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:41:57 localhost openstack_network_exporter[240973]: ERROR 09:41:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:41:57 localhost openstack_network_exporter[240973]: ERROR 09:41:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:41:57 localhost openstack_network_exporter[240973]: Nov 28 04:41:57 localhost openstack_network_exporter[240973]: ERROR 09:41:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:41:57 localhost openstack_network_exporter[240973]: Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:58.052 261346 INFO oslo.privsep.daemon [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Spawned new privsep daemon via rootwrap#033[00m Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:57.946 261449 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:57.951 261449 INFO 
oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:57.954 261449 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:57.955 261449 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261449#033[00m Nov 28 04:41:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:58.055 261346 WARNING oslo_privsep.priv_context [None req-342297e5-b70c-4019-b8b9-e2d6661e0978 - - - - - -] privsep daemon already running#033[00m Nov 28 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:41:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:41:58 localhost podman[239012]: time="2025-11-28T09:41:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:41:58 localhost podman[239012]: @ - - [28/Nov/2025:09:41:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146341 "" "Go-http-client/1.1" Nov 28 04:41:58 localhost podman[239012]: @ - - [28/Nov/2025:09:41:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16742 "" "Go-http-client/1.1" Nov 28 04:41:58 localhost podman[261455]: 2025-11-28 09:41:58.992907431 +0000 UTC m=+0.103742960 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, tcib_managed=true) Nov 28 04:41:59 localhost podman[261455]: 2025-11-28 09:41:59.001636923 +0000 UTC m=+0.112472402 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm) Nov 28 04:41:59 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:41:59 localhost systemd[1]: tmp-crun.1RSn0X.mount: Deactivated successfully. Nov 28 04:41:59 localhost podman[261456]: 2025-11-28 09:41:59.090574362 +0000 UTC m=+0.195831838 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 28 04:41:59 localhost podman[261457]: 2025-11-28 09:41:59.061266813 +0000 UTC m=+0.170044387 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team) Nov 28 04:41:59 localhost podman[261458]: 2025-11-28 09:41:59.15687506 +0000 UTC m=+0.261495715 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:41:59 localhost podman[261456]: 2025-11-28 09:41:59.160375728 +0000 UTC m=+0.265633144 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 
'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:41:59 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:41:59 localhost podman[261457]: 2025-11-28 09:41:59.178410908 +0000 UTC m=+0.287188442 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 04:41:59 localhost podman[261458]: 2025-11-28 09:41:59.192450244 +0000 UTC m=+0.297070919 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:41:59 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:41:59 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:41:59 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:59.443 261346 INFO neutron.agent.linux.ip_lib [None req-342297e5-b70c-4019-b8b9-e2d6661e0978 - - - - - -] Device tap8af1236c-20 cannot be used as it has no MAC address#033[00m Nov 28 04:41:59 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:41:59.447 261346 INFO neutron.agent.linux.ip_lib [None req-17aca3bb-5100-48e6-8b92-ae59d394f285 - - - - - -] Device tapb51f2386-0b cannot be used as it has no MAC address#033[00m Nov 28 04:41:59 localhost kernel: device tap8af1236c-20 entered promiscuous mode Nov 28 04:41:59 localhost NetworkManager[5965]: [1764322919.5242] manager: (tap8af1236c-20): new Generic device (/org/freedesktop/NetworkManager/Devices/13) Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00025|binding|INFO|Claiming lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf for this chassis. Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00026|binding|INFO|8af1236c-205e-4af9-a882-ccde7f9d3ecf: Claiming unknown Nov 28 04:41:59 localhost systemd-udevd[261554]: Network interface NamePolicy= disabled on kernel command line. Nov 28 04:41:59 localhost kernel: device tapb51f2386-0b entered promiscuous mode Nov 28 04:41:59 localhost NetworkManager[5965]: [1764322919.5351] manager: (tapb51f2386-0b): new Generic device (/org/freedesktop/NetworkManager/Devices/14) Nov 28 04:41:59 localhost systemd-udevd[261557]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00027|ovn_bfd|INFO|Enabled BFD on interface ovn-c3237d-0 Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-11aa47-0 Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-07900d-0 Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.544 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dda653c53224db086060962b0702694', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5520a81-bbe1-4feb-9859-6165eafc855d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=8af1236c-205e-4af9-a882-ccde7f9d3ecf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.549 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8af1236c-205e-4af9-a882-ccde7f9d3ecf in 
datapath 887157f9-a765-40c0-8be5-1fba3ddea8f8 bound to our chassis#033[00m Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.553 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port 443f831a-83a9-4df5-adbb-6fdf4d706460 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.553 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 887157f9-a765-40c0-8be5-1fba3ddea8f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.555 158530 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp32linmc5/privsep.sock']#033[00m Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00030|binding|INFO|Claiming lport b51f2386-0b9d-42f5-9ce1-e7fa1b564192 for this chassis. 
Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00031|binding|INFO|b51f2386-0b9d-42f5-9ce1-e7fa1b564192: Claiming unknown Nov 28 04:41:59 localhost ovn_metadata_agent[158525]: 2025-11-28 09:41:59.609 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-40d5da59-6201-424a-8380-80ecc3d67c7e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-40d5da59-6201-424a-8380-80ecc3d67c7e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dda653c53224db086060962b0702694', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f3122580-f73f-40fa-a838-6bad2ff9da2f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=b51f2386-0b9d-42f5-9ce1-e7fa1b564192) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00032|binding|INFO|Setting lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf ovn-installed in OVS Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00033|binding|INFO|Setting lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf up in Southbound Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00034|binding|INFO|Setting 
lport b51f2386-0b9d-42f5-9ce1-e7fa1b564192 ovn-installed in OVS Nov 28 04:41:59 localhost ovn_controller[152726]: 2025-11-28T09:41:59Z|00035|binding|INFO|Setting lport b51f2386-0b9d-42f5-9ce1-e7fa1b564192 up in Southbound Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.195 158530 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.196 158530 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp32linmc5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.089 261619 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.094 261619 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.099 261619 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.100 261619 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261619#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.199 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[6c148fed-fc0b-43cf-9395-0948098b0bde]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.599 261619 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 
04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.599 261619 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.599 261619 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 
localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:42:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.697 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[51ad5e54-84e8-4518-b785-21f5be70137e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.699 158530 INFO neutron.agent.ovn.metadata.agent [-] Port b51f2386-0b9d-42f5-9ce1-e7fa1b564192 in datapath 40d5da59-6201-424a-8380-80ecc3d67c7e unbound 
from our chassis#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.701 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port d15a465b-1a05-4da0-8002-47b641f332f3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.701 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 40d5da59-6201-424a-8380-80ecc3d67c7e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 04:42:00 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:00.702 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb3f4d9-6725-45be-99e1-b02910aebb6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 04:42:00 localhost podman[261669]: Nov 28 04:42:00 localhost podman[261669]: 2025-11-28 09:42:00.773522722 +0000 UTC m=+0.081668555 container create dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:42:00 localhost podman[261683]: Nov 28 04:42:00 localhost podman[261683]: 2025-11-28 09:42:00.805771793 +0000 UTC m=+0.078437765 container create 89b3dc0bb55924bc1fd5f8ac3bf996eec3cf2dde4e5a3645e78d8d092d18d9b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-40d5da59-6201-424a-8380-80ecc3d67c7e, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 28 04:42:00 localhost systemd[1]: Started libpod-conmon-dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439.scope. Nov 28 04:42:00 localhost systemd[1]: Started libcrun container. Nov 28 04:42:00 localhost podman[261669]: 2025-11-28 09:42:00.733622534 +0000 UTC m=+0.041768397 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 04:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7cff75508476df411b334ef64aedbb65646b8067d2b7c094a8dcb894216f571/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:42:00 localhost systemd[1]: Started libpod-conmon-89b3dc0bb55924bc1fd5f8ac3bf996eec3cf2dde4e5a3645e78d8d092d18d9b4.scope. Nov 28 04:42:00 localhost podman[261669]: 2025-11-28 09:42:00.842955876 +0000 UTC m=+0.151101709 container init dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:42:00 localhost systemd[1]: Started libcrun container. 
Nov 28 04:42:00 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47cb17eb211f19f0dbd29cd4ba5c46fd5d30452237ee450b25f44eaeac446706/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 04:42:00 localhost podman[261669]: 2025-11-28 09:42:00.852611936 +0000 UTC m=+0.160757769 container start dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 04:42:00 localhost dnsmasq[261709]: started, version 2.85 cachesize 150 Nov 28 04:42:00 localhost dnsmasq[261709]: DNS service limited to local subnets Nov 28 04:42:00 localhost podman[261683]: 2025-11-28 09:42:00.858123367 +0000 UTC m=+0.130789339 container init 89b3dc0bb55924bc1fd5f8ac3bf996eec3cf2dde4e5a3645e78d8d092d18d9b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40d5da59-6201-424a-8380-80ecc3d67c7e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:42:00 localhost dnsmasq[261709]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 04:42:00 localhost dnsmasq[261709]: warning: no upstream servers 
configured Nov 28 04:42:00 localhost dnsmasq-dhcp[261709]: DHCP, static leases only on 192.168.122.0, lease time 1d Nov 28 04:42:00 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 04:42:00 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 04:42:00 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 04:42:00 localhost podman[261683]: 2025-11-28 09:42:00.763644736 +0000 UTC m=+0.036310728 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 04:42:00 localhost podman[261683]: 2025-11-28 09:42:00.869296933 +0000 UTC m=+0.141962915 container start 89b3dc0bb55924bc1fd5f8ac3bf996eec3cf2dde4e5a3645e78d8d092d18d9b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-40d5da59-6201-424a-8380-80ecc3d67c7e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 04:42:00 localhost dnsmasq[261711]: started, version 2.85 cachesize 150 Nov 28 04:42:00 localhost dnsmasq[261711]: DNS service limited to local subnets Nov 28 04:42:00 localhost dnsmasq[261711]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 04:42:00 localhost dnsmasq[261711]: warning: no upstream servers configured Nov 28 04:42:00 localhost dnsmasq-dhcp[261711]: DHCP, static leases only on 192.168.0.0, lease time 1d Nov 28 04:42:00 localhost dnsmasq[261711]: read /var/lib/neutron/dhcp/40d5da59-6201-424a-8380-80ecc3d67c7e/addn_hosts - 
2 addresses Nov 28 04:42:00 localhost dnsmasq-dhcp[261711]: read /var/lib/neutron/dhcp/40d5da59-6201-424a-8380-80ecc3d67c7e/host Nov 28 04:42:00 localhost dnsmasq-dhcp[261711]: read /var/lib/neutron/dhcp/40d5da59-6201-424a-8380-80ecc3d67c7e/opts Nov 28 04:42:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:42:00.910 261346 INFO neutron.agent.dhcp.agent [None req-2543693c-d7f7-4a61-8938-f9f318f3eb76 - - - - - -] Finished network 887157f9-a765-40c0-8be5-1fba3ddea8f8 dhcp configuration#033[00m Nov 28 04:42:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:42:00.913 261346 INFO neutron.agent.dhcp.agent [None req-77827c0b-294d-473d-976f-8b23782efbcf - - - - - -] Finished network 40d5da59-6201-424a-8380-80ecc3d67c7e dhcp configuration#033[00m Nov 28 04:42:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:42:00.913 261346 INFO neutron.agent.dhcp.agent [None req-89cff752-fd5b-4681-91c5-e1b153e3385a - - - - - -] Synchronizing state complete#033[00m Nov 28 04:42:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:42:00.975 261346 INFO neutron.agent.dhcp.agent [None req-89cff752-fd5b-4681-91c5-e1b153e3385a - - - - - -] DHCP agent started#033[00m Nov 28 04:42:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:42:01 localhost podman[261712]: 2025-11-28 09:42:01.973195366 +0000 UTC m=+0.079241390 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:42:02 localhost podman[261712]: 2025-11-28 09:42:02.010458133 +0000 UTC m=+0.116504177 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:42:02 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:42:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35404 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADAC1590000000001030307) Nov 28 04:42:02 localhost neutron_dhcp_agent[261342]: 2025-11-28 09:42:02.447 261346 INFO neutron.agent.dhcp.agent [None req-9c3ad580-6cc4-4b63-a120-686d2232cbd8 - - - - - -] DHCP configuration for ports {'50fa6f67-abd9-48d7-aedb-8ca08cff0a66', 'a05cd915-7bc5-46e4-9c1e-3efad949112b', '3ff57c88-06c6-4894-984a-80ce116d1456', '09612b07-5142-4b0f-9dab-74bf4403f69f', 'c11672ac-31d9-4e35-992c-9c2cc8fbd9ff', '4a0a3326-6d12-4d57-91f4-2bd267c644b1'} is completed#033[00m Nov 28 04:42:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35405 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADAC57B0000000001030307) Nov 28 04:42:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31084 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADAC8FA0000000001030307) Nov 28 04:42:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35406 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADACD7B0000000001030307) Nov 28 04:42:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 
TTL=62 ID=61066 DF PROTO=TCP SPT=38018 DPT=9102 SEQ=3204140831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADAD0FA0000000001030307) Nov 28 04:42:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:42:09 localhost podman[261736]: 2025-11-28 09:42:09.12835799 +0000 UTC m=+0.234417913 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Nov 28 04:42:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35407 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADADD3B0000000001030307) Nov 28 04:42:09 localhost podman[261736]: 2025-11-28 09:42:09.28560044 +0000 UTC m=+0.391660413 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0) Nov 28 04:42:09 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:42:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35408 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADAFCFA0000000001030307) Nov 28 04:42:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:42:20 localhost podman[261900]: 2025-11-28 09:42:20.842168297 +0000 UTC m=+0.063278075 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal) Nov 28 04:42:20 localhost podman[261900]: 2025-11-28 09:42:20.853794678 +0000 UTC m=+0.074904446 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, version=9.6, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:42:20 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:42:27 localhost openstack_network_exporter[240973]: ERROR 09:42:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:42:27 localhost openstack_network_exporter[240973]: ERROR 09:42:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:42:27 localhost openstack_network_exporter[240973]: ERROR 09:42:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:42:27 localhost openstack_network_exporter[240973]: ERROR 09:42:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:42:27 localhost openstack_network_exporter[240973]: Nov 28 04:42:27 localhost openstack_network_exporter[240973]: ERROR 09:42:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:42:27 localhost openstack_network_exporter[240973]: Nov 28 04:42:28 localhost 
podman[239012]: time="2025-11-28T09:42:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:42:28 localhost podman[239012]: @ - - [28/Nov/2025:09:42:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:42:28 localhost podman[239012]: @ - - [28/Nov/2025:09:42:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17691 "" "Go-http-client/1.1" Nov 28 04:42:29 localhost ovn_controller[152726]: 2025-11-28T09:42:29Z|00036|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory Nov 28 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:42:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:42:29 localhost podman[261921]: 2025-11-28 09:42:29.988624089 +0000 UTC m=+0.095080532 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm) Nov 28 04:42:30 localhost podman[261921]: 2025-11-28 09:42:30.001427965 +0000 UTC m=+0.107884368 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:42:30 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:42:30 localhost podman[261922]: 2025-11-28 09:42:30.051110597 +0000 UTC m=+0.150898663 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 28 04:42:30 localhost podman[261922]: 2025-11-28 09:42:30.089734706 +0000 UTC m=+0.189522742 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:42:30 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:42:30 localhost podman[261923]: 2025-11-28 09:42:30.105396232 +0000 UTC m=+0.201222955 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:42:30 localhost podman[261929]: 2025-11-28 09:42:30.144396561 +0000 UTC 
m=+0.239471821 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:42:30 localhost podman[261929]: 2025-11-28 09:42:30.15205961 +0000 UTC m=+0.247134840 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=podman_exporter) Nov 28 04:42:30 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:42:30 localhost podman[261923]: 2025-11-28 09:42:30.235012164 +0000 UTC m=+0.330838867 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Nov 28 04:42:30 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:42:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31227 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB36890000000001030307) Nov 28 04:42:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:42:32 localhost podman[262002]: 2025-11-28 09:42:32.968000515 +0000 UTC m=+0.077217437 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:42:32 localhost podman[262002]: 2025-11-28 09:42:32.975509548 +0000 UTC m=+0.084726420 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:42:32 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:42:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31228 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB3A7A0000000001030307) Nov 28 04:42:33 localhost sshd[262026]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:42:33 localhost systemd-logind[763]: New session 59 of user zuul. Nov 28 04:42:33 localhost systemd[1]: Started Session 59 of User zuul. Nov 28 04:42:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35409 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB3CFA0000000001030307) Nov 28 04:42:34 localhost python3.9[262137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:42:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31229 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB427A0000000001030307) Nov 28 04:42:36 localhost python3.9[262249]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:42:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31085 DF PROTO=TCP SPT=36740 DPT=9102 SEQ=3245871172 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB46FA0000000001030307) Nov 28 04:42:36 localhost network[262266]: You are using 'network' service provided by 'network-scripts', which are now deprecated. 
Nov 28 04:42:36 localhost network[262267]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:42:36 localhost network[262268]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:42:37 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:42:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31230 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB523A0000000001030307) Nov 28 04:42:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:42:39 localhost systemd[1]: tmp-crun.hNyOOJ.mount: Deactivated successfully. Nov 28 04:42:39 localhost podman[262481]: 2025-11-28 09:42:39.987367947 +0000 UTC m=+0.090790578 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true) Nov 28 04:42:40 localhost podman[262481]: 2025-11-28 09:42:40.027460621 +0000 UTC m=+0.130883222 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true) Nov 28 04:42:40 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:42:40 localhost python3.9[262519]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d Nov 28 04:42:41 localhost python3.9[262585]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:42:45 localhost nova_compute[228497]: 2025-11-28 09:42:45.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:45 localhost nova_compute[228497]: 2025-11-28 09:42:45.074 228501 
DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 04:42:45 localhost nova_compute[228497]: 2025-11-28 09:42:45.114 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 04:42:45 localhost python3.9[262697]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:42:46 localhost python3.9[262807]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:42:47 localhost python3.9[262918]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:42:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31231 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADB72FA0000000001030307) Nov 28 04:42:48 localhost python3.9[263030]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False 
firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:42:49 localhost nova_compute[228497]: 2025-11-28 09:42:49.109 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:50 localhost python3.9[263140]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:42:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:50.822 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:42:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:50.823 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:42:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:42:50.824 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:42:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:42:51 localhost podman[263198]: 2025-11-28 09:42:51.009815578 +0000 UTC m=+0.097770354 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible) Nov 28 04:42:51 localhost podman[263198]: 2025-11-28 09:42:51.050604439 +0000 UTC m=+0.138559215 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:42:51 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:42:51 localhost python3.9[263272]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.105 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.106 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.106 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.107 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.107 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.545 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.721 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.722 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12548MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.722 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.722 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.887 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.887 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:42:52 localhost nova_compute[228497]: 2025-11-28 09:42:52.976 228501 DEBUG 
nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.043 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.044 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.057 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.075 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_AESNI,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NODE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_FMA3,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_SSE2,HW_CPU_X86_SSSE3,COMPUTE_RESCUE_BFV,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE41,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.092 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.532 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.539 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.561 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.563 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:42:53 localhost nova_compute[228497]: 2025-11-28 09:42:53.563 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:42:53 localhost python3.9[263426]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:42:53 localhost network[263445]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:42:53 localhost network[263446]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:42:53 localhost network[263447]: It is advised to switch to 'NetworkManager' instead for network management. Nov 28 04:42:54 localhost nova_compute[228497]: 2025-11-28 09:42:54.560 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:54 localhost nova_compute[228497]: 2025-11-28 09:42:54.560 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:54 localhost nova_compute[228497]: 2025-11-28 09:42:54.561 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:54 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit 
uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.075 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.076 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.076 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.145 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.145 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.146 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:42:55 localhost nova_compute[228497]: 2025-11-28 09:42:55.146 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:56 localhost nova_compute[228497]: 2025-11-28 09:42:56.087 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:57 localhost nova_compute[228497]: 2025-11-28 09:42:57.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:57 localhost openstack_network_exporter[240973]: ERROR 09:42:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:42:57 localhost openstack_network_exporter[240973]: ERROR 09:42:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:42:57 localhost openstack_network_exporter[240973]: ERROR 09:42:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:42:57 localhost openstack_network_exporter[240973]: ERROR 09:42:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:42:57 localhost openstack_network_exporter[240973]: Nov 28 04:42:57 localhost openstack_network_exporter[240973]: ERROR 09:42:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 
04:42:57 localhost openstack_network_exporter[240973]: Nov 28 04:42:58 localhost nova_compute[228497]: 2025-11-28 09:42:58.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:58 localhost nova_compute[228497]: 2025-11-28 09:42:58.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:42:58 localhost nova_compute[228497]: 2025-11-28 09:42:58.075 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 04:42:58 localhost python3.9[263681]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 04:42:58 localhost podman[239012]: time="2025-11-28T09:42:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:42:58 localhost podman[239012]: @ - - [28/Nov/2025:09:42:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:42:58 localhost podman[239012]: @ - - [28/Nov/2025:09:42:58 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17699 "" "Go-http-client/1.1" Nov 28 04:42:59 localhost python3.9[263791]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled Nov 28 04:43:00 localhost python3.9[263901]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:43:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:43:00 localhost podman[263959]: 2025-11-28 09:43:00.666852733 +0000 UTC m=+0.092762919 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:43:00 localhost podman[263959]: 2025-11-28 09:43:00.677737369 +0000 UTC m=+0.103647605 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:43:00 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:43:00 localhost podman[263960]: 2025-11-28 09:43:00.728438667 +0000 UTC m=+0.150027920 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller) Nov 28 04:43:00 localhost python3.9[263958]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 
04:43:00 localhost systemd[1]: tmp-crun.qQfLa5.mount: Deactivated successfully. Nov 28 04:43:00 localhost podman[263960]: 2025-11-28 09:43:00.772690955 +0000 UTC m=+0.194280238 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:43:00 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:43:00 localhost podman[263961]: 2025-11-28 09:43:00.827818628 +0000 UTC m=+0.248454681 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 04:43:00 localhost podman[263961]: 2025-11-28 09:43:00.861449998 +0000 UTC 
m=+0.282086091 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:43:00 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:43:00 localhost podman[263967]: 2025-11-28 09:43:00.776452901 +0000 UTC m=+0.188853669 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:43:00 localhost podman[263967]: 2025-11-28 09:43:00.909300357 +0000 UTC m=+0.321701075 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:43:00 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:43:01 localhost python3.9[264153]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25821 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBABB90000000001030307) Nov 28 04:43:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25822 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBAFBA0000000001030307) Nov 28 04:43:03 localhost python3.9[264263]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:43:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31232 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBB2FA0000000001030307) Nov 28 04:43:03 localhost systemd[1]: tmp-crun.zUeMs8.mount: Deactivated successfully. Nov 28 04:43:03 localhost podman[264319]: 2025-11-28 09:43:03.984853341 +0000 UTC m=+0.094040748 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:43:03 localhost podman[264319]: 
2025-11-28 09:43:03.999400661 +0000 UTC m=+0.108588028 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:43:04 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:43:04 localhost python3.9[264396]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:04 localhost nova_compute[228497]: 2025-11-28 09:43:04.565 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:05 localhost python3.9[264508]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25823 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBB7BA0000000001030307) Nov 28 04:43:05 localhost python3.9[264620]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:43:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35410 DF PROTO=TCP SPT=34354 DPT=9102 SEQ=908223820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBBAFB0000000001030307) Nov 28 04:43:06 localhost python3.9[264731]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False 
after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:06 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation. Nov 28 04:43:06 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:43:06 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:43:06 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:43:07 localhost python3.9[264842]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:08 localhost python3.9[264952]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:09 localhost python3.9[265062]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None 
insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25824 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBC77A0000000001030307) Nov 28 04:43:09 localhost python3.9[265172]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line= user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:43:10 localhost systemd[1]: tmp-crun.24VdHx.mount: Deactivated successfully. 
Nov 28 04:43:10 localhost podman[265283]: 2025-11-28 09:43:10.325766676 +0000 UTC m=+0.087213287 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.schema-version=1.0) Nov 28 04:43:10 localhost podman[265283]: 2025-11-28 09:43:10.337097966 +0000 UTC m=+0.098544607 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 04:43:10 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:43:10 localhost python3.9[265282]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:11 localhost python3.9[265413]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:12 localhost python3.9[265523]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:13 localhost python3.9[265580]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:14 localhost python3.9[265690]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:14 localhost python3.9[265747]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container 
recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:15 localhost python3.9[265857]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:16 localhost python3.9[265967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:17 localhost python3.9[266024]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25825 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADBE6FA0000000001030307) Nov 28 04:43:17 localhost python3.9[266134]: ansible-ansible.legacy.stat 
Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:18 localhost python3.9[266191]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:19 localhost python3.9[266301]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:43:19 localhost systemd[1]: Reloading. Nov 28 04:43:19 localhost systemd-rc-local-generator[266328]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:43:19 localhost systemd-sysv-generator[266332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:19 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:20 localhost python3.9[266449]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:20 localhost python3.9[266506]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:43:21 localhost podman[266672]: 2025-11-28 09:43:21.589324898 +0000 UTC m=+0.072085149 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41) Nov 28 04:43:21 localhost podman[266672]: 2025-11-28 09:43:21.635877347 +0000 UTC m=+0.118637528 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the 
latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:43:21 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:43:21 localhost python3.9[266671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:22 localhost python3.9[266760]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:23 localhost python3.9[266888]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:43:23 localhost 
systemd[1]: Reloading. Nov 28 04:43:23 localhost systemd-rc-local-generator[266911]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:43:23 localhost systemd-sysv-generator[266914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:23 localhost systemd[1]: Starting Create netns directory... Nov 28 04:43:23 localhost systemd[1]: run-netns-placeholder.mount: Deactivated successfully. 
Nov 28 04:43:23 localhost systemd[1]: netns-placeholder.service: Deactivated successfully. Nov 28 04:43:23 localhost systemd[1]: Finished Create netns directory. Nov 28 04:43:24 localhost python3.9[267039]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:25 localhost python3.9[267149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:25 localhost python3.9[267206]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:26 localhost python3.9[267316]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:43:27 localhost python3.9[267426]: ansible-ansible.legacy.stat Invoked with 
path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:27 localhost openstack_network_exporter[240973]: ERROR 09:43:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:43:27 localhost openstack_network_exporter[240973]: ERROR 09:43:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:43:27 localhost openstack_network_exporter[240973]: ERROR 09:43:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:43:27 localhost openstack_network_exporter[240973]: ERROR 09:43:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:43:27 localhost openstack_network_exporter[240973]: Nov 28 04:43:27 localhost openstack_network_exporter[240973]: ERROR 09:43:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:43:27 localhost openstack_network_exporter[240973]: Nov 28 04:43:27 localhost python3.9[267483]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.samh7goi recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:28 localhost python3.9[267593]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None 
_diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:28 localhost podman[239012]: time="2025-11-28T09:43:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:43:28 localhost podman[239012]: @ - - [28/Nov/2025:09:43:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:43:28 localhost podman[239012]: @ - - [28/Nov/2025:09:43:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1" Nov 28 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:43:30 localhost podman[267871]: 2025-11-28 09:43:30.982860587 +0000 UTC m=+0.092741228 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible) Nov 28 04:43:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:43:31 localhost podman[267872]: 2025-11-28 09:43:31.038632552 +0000 UTC m=+0.148427380 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 04:43:31 localhost podman[267871]: 2025-11-28 09:43:31.04604958 +0000 UTC m=+0.155930231 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 04:43:31 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:43:31 localhost python3.9[267870]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False Nov 28 04:43:31 localhost podman[267909]: 2025-11-28 09:43:31.125683863 +0000 UTC m=+0.126509493 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:43:31 localhost podman[267909]: 2025-11-28 09:43:31.133711651 +0000 UTC m=+0.134537261 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:43:31 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:43:31 localhost podman[267873]: 2025-11-28 09:43:31.205705596 +0000 UTC m=+0.310022265 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 28 04:43:31 localhost podman[267873]: 2025-11-28 09:43:31.213529518 +0000 UTC m=+0.317846197 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:43:31 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:43:31 localhost podman[267872]: 2025-11-28 09:43:31.257861419 +0000 UTC m=+0.367656187 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:43:31 localhost systemd[1]: 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:43:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33657 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC20E90000000001030307) Nov 28 04:43:32 localhost python3.9[268062]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:43:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33658 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC24FB0000000001030307) Nov 28 04:43:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25826 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC26FA0000000001030307) Nov 28 04:43:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:43:34 localhost podman[268173]: 2025-11-28 09:43:34.446009054 +0000 UTC m=+0.078169168 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:43:34 localhost podman[268173]: 2025-11-28 09:43:34.45267011 +0000 UTC m=+0.084830204 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:43:34 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:43:34 localhost python3.9[268172]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None Nov 28 04:43:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33659 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC2CFB0000000001030307) Nov 28 04:43:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31233 DF PROTO=TCP SPT=40914 DPT=9102 SEQ=318385806 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC30FA0000000001030307) Nov 28 04:43:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33660 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC3CBA0000000001030307) Nov 28 04:43:39 localhost python3[268331]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:43:39 localhost python3[268331]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f",#012 "Digest": "sha256:6296d2d95faaeb90443ee98443b39aa81b5152414f9542335d72711bb15fefdd",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:6296d2d95faaeb90443ee98443b39aa81b5152414f9542335d72711bb15fefdd"#012 ],#012 "Parent": "",#012 "Comment": 
"",#012 "Created": "2025-11-26T06:12:42.268223466Z",#012 "Config": {#012 "User": "root",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 249482220,#012 "VirtualSize": 249482220,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/da9f726a106a4f4af24ed404443eca5cd50a43c6e5c864c256f158761c28e938/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/da9f726a106a4f4af24ed404443eca5cd50a43c6e5c864c256f158761c28e938/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:135e1f5eea0bd6ac73fc43c122f58d5ed97cb8a56365c4a958c72d470055986b"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": 
"GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "root",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 
},#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:37.752912815Z",#012 "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:38.066850603Z",#012 Nov 28 04:43:40 localhost python3.9[268503]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:43:40 localhost podman[268584]: 2025-11-28 09:43:40.974494827 +0000 UTC m=+0.073811343 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 04:43:40 localhost podman[268584]: 2025-11-28 09:43:40.991469982 +0000 UTC m=+0.090786518 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:43:41 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:43:41 localhost python3.9[268634]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:41 localhost python3.9[268689]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:42 localhost python3.9[268798]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764323021.6852741-1367-158546594071798/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:43 localhost python3.9[268853]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:43:44 localhost python3.9[268963]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:43:46 localhost python3.9[269073]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None 
modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:47 localhost python3.9[269183]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None Nov 28 04:43:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33661 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC5CFA0000000001030307) Nov 28 04:43:47 localhost python3.9[269293]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled Nov 28 04:43:48 localhost python3.9[269403]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:43:49 localhost python3.9[269460]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:49 localhost python3.9[269570]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics mode=0644 
state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:43:50.823 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:43:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:43:50.824 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:43:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:43:50.824 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:43:50 localhost python3.9[269680]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None Nov 28 04:43:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:43:51 localhost systemd[1]: tmp-crun.ykMeot.mount: Deactivated successfully. Nov 28 04:43:51 localhost podman[269683]: 2025-11-28 09:43:51.984893955 +0000 UTC m=+0.088567040 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible) Nov 28 04:43:51 localhost podman[269683]: 2025-11-28 09:43:51.997540546 +0000 UTC m=+0.101213610 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red 
Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 04:43:52 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.162 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.162 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.163 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.163 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: 
np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.163 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.608 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.803 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.805 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12432MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.806 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.806 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.874 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.874 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:43:52 localhost nova_compute[228497]: 2025-11-28 09:43:52.900 228501 DEBUG 
oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:43:53 localhost nova_compute[228497]: 2025-11-28 09:43:53.352 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:43:53 localhost nova_compute[228497]: 2025-11-28 09:43:53.358 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:43:53 localhost nova_compute[228497]: 2025-11-28 09:43:53.462 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:43:53 localhost nova_compute[228497]: 2025-11-28 09:43:53.464 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:43:53 localhost nova_compute[228497]: 2025-11-28 09:43:53.465 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:43:54 localhost nova_compute[228497]: 2025-11-28 09:43:54.461 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:54 localhost nova_compute[228497]: 2025-11-28 09:43:54.462 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:54 localhost nova_compute[228497]: 2025-11-28 09:43:54.462 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:54 localhost python3.9[269853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d Nov 28 04:43:55 localhost nova_compute[228497]: 2025-11-28 09:43:55.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:55 localhost 
nova_compute[228497]: 2025-11-28 09:43:55.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:43:55 localhost nova_compute[228497]: 2025-11-28 09:43:55.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:43:55 localhost nova_compute[228497]: 2025-11-28 09:43:55.091 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:43:55 localhost python3.9[269967]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:43:56 localhost nova_compute[228497]: 2025-11-28 09:43:56.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:57 localhost python3.9[270077]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:43:57 localhost systemd[1]: Reloading. 
Nov 28 04:43:57 localhost nova_compute[228497]: 2025-11-28 09:43:57.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:43:57 localhost nova_compute[228497]: 2025-11-28 09:43:57.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:43:57 localhost systemd-rc-local-generator[270101]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:43:57 localhost systemd-sysv-generator[270106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:43:57 localhost openstack_network_exporter[240973]: ERROR 09:43:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:43:57 localhost openstack_network_exporter[240973]: ERROR 09:43:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:43:57 localhost openstack_network_exporter[240973]: ERROR 09:43:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:43:57 localhost openstack_network_exporter[240973]: ERROR 09:43:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:43:57 localhost openstack_network_exporter[240973]: Nov 28 04:43:57 localhost openstack_network_exporter[240973]: ERROR 09:43:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:43:57 localhost openstack_network_exporter[240973]: Nov 28 04:43:58 localhost python3.9[270221]: ansible-ansible.builtin.service_facts Invoked Nov 28 04:43:58 localhost network[270238]: You are using 'network' service provided by 'network-scripts', which are now deprecated. Nov 28 04:43:58 localhost network[270239]: 'network-scripts' will be removed from distribution in near future. Nov 28 04:43:58 localhost network[270240]: It is advised to switch to 'NetworkManager' instead for network management. 
Nov 28 04:43:58 localhost podman[239012]: time="2025-11-28T09:43:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:43:58 localhost podman[239012]: @ - - [28/Nov/2025:09:43:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:43:58 localhost podman[239012]: @ - - [28/Nov/2025:09:43:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17699 "" "Go-http-client/1.1" Nov 28 04:43:59 localhost nova_compute[228497]: 2025-11-28 09:43:59.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:00 localhost nova_compute[228497]: 2025-11-28 09:44:00.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.615 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.616 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 
28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:44:00.619 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:44:01 localhost podman[270272]: 2025-11-28 09:44:01.248170507 +0000 UTC m=+0.088891269 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:44:01 localhost podman[270272]: 2025-11-28 09:44:01.264476161 +0000 UTC m=+0.105196913 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.build-date=20251125) Nov 28 04:44:01 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:44:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:44:01 localhost systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:44:01 localhost podman[270317]: 2025-11-28 09:44:01.379770055 +0000 UTC m=+0.075017199 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:44:01 localhost podman[270306]: 2025-11-28 09:44:01.338682226 +0000 UTC m=+0.076332182 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:44:01 localhost podman[270273]: 2025-11-28 09:44:01.360497519 +0000 UTC m=+0.198315011 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:44:01 localhost podman[270306]: 2025-11-28 09:44:01.421364561 +0000 UTC m=+0.159014527 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 28 04:44:01 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:44:01 localhost podman[270273]: 2025-11-28 09:44:01.495967458 +0000 UTC m=+0.333784880 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:44:01 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:44:01 localhost podman[270317]: 2025-11-28 09:44:01.547768589 +0000 UTC m=+0.243015723 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:44:01 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:44:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28298 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC96190000000001030307) Nov 28 04:44:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28299 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC9A3A0000000001030307) Nov 28 04:44:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33662 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADC9CFA0000000001030307) Nov 28 04:44:04 localhost python3.9[270558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:44:04 localhost podman[270670]: 2025-11-28 09:44:04.731207539 +0000 UTC m=+0.081279804 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:44:04 localhost podman[270670]: 2025-11-28 09:44:04.74355161 +0000 UTC m=+0.093623805 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:44:04 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:44:04 localhost python3.9[270669]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28300 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADCA23A0000000001030307) Nov 28 04:44:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25827 DF PROTO=TCP SPT=38760 DPT=9102 SEQ=1919062088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADCA4FA0000000001030307) Nov 28 04:44:06 localhost python3.9[270803]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:07 localhost python3.9[270914]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28301 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADCB1FA0000000001030307) Nov 28 04:44:09 localhost python3.9[271025]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False 
force=None masked=None Nov 28 04:44:09 localhost python3.9[271136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:10 localhost python3.9[271247]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:44:11 localhost podman[271283]: 2025-11-28 09:44:11.981154517 +0000 UTC m=+0.083009538 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Nov 28 04:44:11 localhost podman[271283]: 2025-11-28 09:44:11.994830638 +0000 UTC m=+0.096685649 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 04:44:12 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:44:12 localhost python3.9[271376]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:44:13 localhost python3.9[271487]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:14 localhost python3.9[271597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:14 localhost python3.9[271707]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False 
_original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:15 localhost python3.9[271817]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:16 localhost python3.9[271927]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:17 localhost python3.9[272037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28302 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADCD2FA0000000001030307) Nov 28 04:44:18 localhost python3.9[272147]: 
ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:19 localhost python3.9[272257]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:19 localhost python3.9[272367]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:20 localhost python3.9[272477]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:21 localhost python3.9[272587]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service 
state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:21 localhost python3.9[272697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:22 localhost python3.9[272807]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:44:22 localhost podman[272922]: 2025-11-28 09:44:22.566669336 +0000 UTC m=+0.097758624 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible) Nov 28 04:44:22 localhost podman[272922]: 2025-11-28 09:44:22.575820479 +0000 UTC m=+0.106909767 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, 
container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal) Nov 28 04:44:22 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:44:22 localhost python3.9[272962]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:23 localhost python3.9[273141]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:23 localhost podman[273155]: 2025-11-28 09:44:23.311038989 +0000 UTC m=+0.087336371 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, RELEASE=main, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Nov 28 04:44:23 localhost podman[273155]: 2025-11-28 09:44:23.427484149 +0000 UTC m=+0.203781501 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux ) Nov 28 04:44:24 localhost python3.9[273369]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False 
follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:44:24 localhost python3.9[273512]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012 systemctl disable --now certmonger.service#012 test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:27 localhost python3.9[273640]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None Nov 28 04:44:27 localhost openstack_network_exporter[240973]: ERROR 09:44:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:44:27 localhost openstack_network_exporter[240973]: ERROR 09:44:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:44:27 localhost openstack_network_exporter[240973]: ERROR 09:44:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:44:27 localhost openstack_network_exporter[240973]: ERROR 09:44:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:44:27 localhost openstack_network_exporter[240973]: Nov 28 04:44:27 localhost openstack_network_exporter[240973]: ERROR 09:44:27 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:44:27 localhost openstack_network_exporter[240973]: Nov 28 04:44:28 localhost python3.9[273750]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None Nov 28 04:44:28 localhost systemd[1]: Reloading. Nov 28 04:44:28 localhost systemd-rc-local-generator[273775]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:44:28 localhost systemd-sysv-generator[273778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:44:28 localhost podman[239012]: time="2025-11-28T09:44:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:44:28 localhost podman[239012]: @ - - [28/Nov/2025:09:44:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:44:28 localhost podman[239012]: @ - - [28/Nov/2025:09:44:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17684 "" "Go-http-client/1.1" Nov 28 04:44:29 localhost python3.9[273896]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:30 localhost python3.9[274007]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:31 localhost python3.9[274118]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False 
expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:44:31 localhost podman[274229]: 2025-11-28 09:44:31.54890356 +0000 UTC m=+0.086562437 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:44:31 localhost podman[274229]: 2025-11-28 09:44:31.559841519 +0000 UTC m=+0.097500396 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:44:31 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:44:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:44:31 localhost systemd[1]: tmp-crun.FFBhoo.mount: Deactivated successfully. Nov 28 04:44:31 localhost podman[274249]: 2025-11-28 09:44:31.658500878 +0000 UTC m=+0.087937290 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible) Nov 28 04:44:31 localhost python3.9[274230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:31 localhost podman[274269]: 2025-11-28 09:44:31.723328032 +0000 UTC m=+0.082265494 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 28 04:44:31 localhost podman[274249]: 2025-11-28 09:44:31.741408711 +0000 UTC m=+0.170845163 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 04:44:31 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:44:31 localhost podman[274269]: 2025-11-28 09:44:31.764452724 +0000 UTC m=+0.123390246 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 28 04:44:31 localhost systemd[1]: 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:44:31 localhost podman[274250]: 2025-11-28 09:44:31.698729452 +0000 UTC m=+0.126215083 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:44:31 localhost podman[274250]: 2025-11-28 09:44:31.833510418 +0000 UTC m=+0.260996089 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': 
'/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:44:31 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:44:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18462 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD0B490000000001030307) Nov 28 04:44:32 localhost python3.9[274423]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:32 localhost python3.9[274534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18463 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD0F3B0000000001030307) Nov 28 04:44:33 localhost python3.9[274645]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None 
chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:34 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28303 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD12FA0000000001030307) Nov 28 04:44:34 localhost python3.9[274756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:44:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:44:34 localhost podman[274775]: 2025-11-28 09:44:34.95945728 +0000 UTC m=+0.065476125 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:44:34 localhost podman[274775]: 2025-11-28 09:44:34.996597648 +0000 UTC m=+0.102616433 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:44:35 localhost systemd[1]: 
56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:44:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18464 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD173A0000000001030307) Nov 28 04:44:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33663 DF PROTO=TCP SPT=43764 DPT=9102 SEQ=419132269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD1AFB0000000001030307) Nov 28 04:44:37 localhost python3.9[274889]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:37 localhost python3.9[274999]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:38 localhost python3.9[275109]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S 
access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:39 localhost python3.9[275219]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18465 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD26FA0000000001030307) Nov 28 04:44:40 localhost python3.9[275329]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:41 localhost python3.9[275439]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:44:42 localhost podman[275550]: 2025-11-28 09:44:42.640187766 +0000 UTC m=+0.084817353 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd) Nov 28 04:44:42 localhost podman[275550]: 2025-11-28 09:44:42.680586745 +0000 UTC 
m=+0.125216332 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true) Nov 28 04:44:42 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:44:42 localhost python3.9[275549]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:43 localhost python3.9[275677]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:43 localhost python3.9[275787]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:44 localhost python3.9[275897]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18466 
DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD46FA0000000001030307) Nov 28 04:44:50 localhost python3.9[276007]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None Nov 28 04:44:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:44:50.824 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:44:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:44:50.824 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:44:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:44:50.825 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:44:51 localhost sshd[276026]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:44:51 localhost systemd-logind[763]: New session 60 of user zuul. Nov 28 04:44:51 localhost systemd[1]: Started Session 60 of User zuul. Nov 28 04:44:51 localhost systemd[1]: session-60.scope: Deactivated successfully. Nov 28 04:44:51 localhost systemd-logind[763]: Session 60 logged out. Waiting for processes to exit. Nov 28 04:44:51 localhost systemd-logind[763]: Removed session 60. 
Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.069 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.092 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.117 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.117 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.117 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.118 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Auditing locally available compute 
resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.118 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.583 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.812 228501 WARNING nova.virt.libvirt.driver [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.815 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12530MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.815 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:44:52 localhost nova_compute[228497]: 2025-11-28 09:44:52.816 228501 DEBUG oslo_concurrency.lockutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:44:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:44:52 localhost podman[276069]: 2025-11-28 09:44:52.975901713 +0000 UTC m=+0.083175651 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:44:52 localhost podman[276069]: 2025-11-28 09:44:52.994607692 +0000 UTC m=+0.101881590 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, version=9.6, container_name=openstack_network_exporter) Nov 28 04:44:53 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.025 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.025 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.042 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.476 228501 DEBUG oslo_concurrency.processutils [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.480 228501 DEBUG nova.compute.provider_tree [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.502 228501 DEBUG nova.scheduler.client.report [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.503 228501 DEBUG nova.compute.resource_tracker [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:44:53 localhost nova_compute[228497]: 2025-11-28 09:44:53.504 228501 DEBUG oslo_concurrency.lockutils [None 
req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:44:53 localhost python3.9[276199]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:44:54 localhost python3.9[276287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764323093.104491-3040-151366628889114/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:54 localhost nova_compute[228497]: 2025-11-28 09:44:54.503 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:54 localhost nova_compute[228497]: 2025-11-28 09:44:54.504 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:54 localhost python3.9[276395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True 
get_attributes=True get_selinux_context=False Nov 28 04:44:55 localhost nova_compute[228497]: 2025-11-28 09:44:55.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:55 localhost python3.9[276450]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:56 localhost nova_compute[228497]: 2025-11-28 09:44:56.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:56 localhost nova_compute[228497]: 2025-11-28 09:44:56.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:44:56 localhost nova_compute[228497]: 2025-11-28 09:44:56.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:44:56 localhost nova_compute[228497]: 2025-11-28 09:44:56.097 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - 
- - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:44:56 localhost python3.9[276558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:44:56 localhost python3.9[276644]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764323095.6960287-3040-276943246522897/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:57 localhost python3.9[276752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:44:57 localhost openstack_network_exporter[240973]: ERROR 09:44:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:44:57 localhost openstack_network_exporter[240973]: ERROR 09:44:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:44:57 localhost openstack_network_exporter[240973]: ERROR 09:44:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:44:57 localhost openstack_network_exporter[240973]: ERROR 09:44:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:44:57 localhost 
openstack_network_exporter[240973]: Nov 28 04:44:57 localhost openstack_network_exporter[240973]: ERROR 09:44:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:44:57 localhost openstack_network_exporter[240973]: Nov 28 04:44:57 localhost python3.9[276838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764323096.748872-3040-34235935653247/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=ea203e550d6f82354ff814f038f2bcabd98eed86 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:58 localhost nova_compute[228497]: 2025-11-28 09:44:58.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:58 localhost nova_compute[228497]: 2025-11-28 09:44:58.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:44:58 localhost nova_compute[228497]: 2025-11-28 09:44:58.074 228501 DEBUG nova.compute.manager [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:44:58 localhost python3.9[276946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:44:58 localhost podman[239012]: time="2025-11-28T09:44:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:44:58 localhost podman[239012]: @ - - [28/Nov/2025:09:44:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:44:58 localhost python3.9[277032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764323097.92153-3040-97221860165152/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:44:58 localhost podman[239012]: @ - - [28/Nov/2025:09:44:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17699 "" "Go-http-client/1.1" Nov 28 04:44:59 localhost python3.9[277140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:45:00 localhost python3.9[277226]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t 
src=/home/zuul/.ansible/tmp/ansible-tmp-1764323099.0921626-3040-43139872244947/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:45:01 localhost nova_compute[228497]: 2025-11-28 09:45:01.073 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:45:01 localhost nova_compute[228497]: 2025-11-28 09:45:01.074 228501 DEBUG oslo_service.periodic_task [None req-3c98ccaf-3970-4bf7-a4d1-56213c549b35 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:45:01 localhost python3.9[277336]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 04:45:01 localhost podman[277447]: 2025-11-28 09:45:01.764307389 +0000 UTC m=+0.074719328 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:45:01 localhost podman[277447]: 2025-11-28 09:45:01.807438248 +0000 UTC m=+0.117850147 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:45:01 localhost systemd[1]: 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:45:01 localhost python3.9[277446]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:45:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:45:01 localhost podman[277466]: 2025-11-28 09:45:01.900111464 +0000 UTC m=+0.118307472 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 28 04:45:01 localhost podman[277466]: 2025-11-28 09:45:01.906465981 +0000 UTC m=+0.124662029 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 28 04:45:01 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:45:01 localhost podman[277482]: 2025-11-28 09:45:01.988519406 +0000 UTC m=+0.085977598 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 04:45:02 localhost podman[277482]: 2025-11-28 09:45:02.042192002 +0000 UTC m=+0.139650174 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 04:45:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58794 DF PROTO=TCP 
SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD80790000000001030307) Nov 28 04:45:02 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:45:02 localhost podman[277486]: 2025-11-28 09:45:02.046687461 +0000 UTC m=+0.132317276 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:45:02 localhost podman[277486]: 2025-11-28 09:45:02.126775617 +0000 UTC m=+0.212405432 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:45:02 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:45:02 localhost python3.9[277642]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58795 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD847A0000000001030307) Nov 28 04:45:03 localhost python3.9[277754]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:45:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18467 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD86FA0000000001030307) Nov 28 04:45:04 localhost python3.9[277862]: ansible-ansible.builtin.stat Invoked with 
path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:04 localhost python3.9[277972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:45:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58796 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD8C7B0000000001030307) Nov 28 04:45:05 localhost python3.9[278027]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:45:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:45:05 localhost podman[278136]: 2025-11-28 09:45:05.960631705 +0000 UTC m=+0.070050154 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:45:05 localhost python3.9[278135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False Nov 28 04:45:05 localhost podman[278136]: 2025-11-28 09:45:05.973469114 +0000 UTC m=+0.082887613 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:45:05 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:45:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28304 DF PROTO=TCP SPT=46354 DPT=9102 SEQ=1223652804 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD90FA0000000001030307) Nov 28 04:45:06 localhost python3.9[278213]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None Nov 28 04:45:07 localhost python3.9[278323]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False Nov 28 04:45:08 localhost python3.9[278433]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:45:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58797 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADD9C3B0000000001030307) Nov 28 04:45:09 localhost python3[278543]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:45:09 localhost python3[278543]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": 
"b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a",#012 "Digest": "sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:36:07.10279245Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211782527,#012 "VirtualSize": 1211782527,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309/diff:/var/lib/containers/storage/overlay/f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": 
"/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:c136b33417f134a3b932677bcf7a2df089c29f20eca250129eafd2132d4708bb",#012 "sha256:7913bde445307e7f24767d9149b2e7f498930793ac9f073ccec69b608c009d31",#012 "sha256:084b2323a717fe711217b0ec21da61f4804f7a0d506adae935888421b80809cf"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 
"created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 
"empty_layer": true#012 },#012 {#012 Nov 28 04:45:10 localhost python3.9[278719]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:11 localhost python3.9[278831]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False Nov 28 04:45:12 localhost python3.9[278941]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data Nov 28 04:45:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:45:12 localhost podman[278959]: 2025-11-28 09:45:12.983168495 +0000 UTC m=+0.094489934 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 28 04:45:12 localhost podman[278959]: 2025-11-28 09:45:12.999534443 +0000 UTC m=+0.110855872 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3) Nov 28 04:45:13 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:45:13 localhost python3[279070]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False Nov 28 04:45:13 localhost python3[279070]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012 {#012 "Id": "b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a",#012 "Digest": "sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e",#012 "RepoTags": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"#012 ],#012 "RepoDigests": [#012 "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:647f1d5dc1b70ffa3e1832199619d57bfaeceac8823ff53ece64b8e42cc9688e"#012 ],#012 "Parent": "",#012 "Comment": "",#012 "Created": "2025-11-26T06:36:07.10279245Z",#012 "Config": {#012 "User": "nova",#012 "Env": [#012 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012 "LANG=en_US.UTF-8",#012 "TZ=UTC",#012 "container=oci"#012 ],#012 "Entrypoint": [#012 "dumb-init",#012 "--single-child",#012 "--"#012 ],#012 "Cmd": [#012 "kolla_start"#012 ],#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 
"org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 "org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "StopSignal": "SIGTERM"#012 },#012 "Version": "",#012 "Author": "",#012 "Architecture": "amd64",#012 "Os": "linux",#012 "Size": 1211782527,#012 "VirtualSize": 1211782527,#012 "GraphDriver": {#012 "Name": "overlay",#012 "Data": {#012 "LowerDir": "/var/lib/containers/storage/overlay/c3914bdda39f47c0c497a56396d11c84b489b87df2bfd019b00ddced1e1ae309/diff:/var/lib/containers/storage/overlay/f20c3ba929bbb53a84e323dddb8c0eaf3ca74b6729310e964e1fa9eee119e43a/diff:/var/lib/containers/storage/overlay/06a1fa74af6494e3f3865876d25e5a11b62fb12ede8164b96bce734f8d084c66/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012 "UpperDir": "/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/diff",#012 "WorkDir": "/var/lib/containers/storage/overlay/f7726cecd9e8969401979ecd2369f385c53efc762aea19175eca5dfbffa00449/work"#012 }#012 },#012 "RootFS": {#012 "Type": "layers",#012 "Layers": [#012 "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012 "sha256:1e3477d3ea795ca64b46f28aa9428ba791c4250e0fd05e173a4b9c0fb0bdee23",#012 "sha256:c136b33417f134a3b932677bcf7a2df089c29f20eca250129eafd2132d4708bb",#012 "sha256:7913bde445307e7f24767d9149b2e7f498930793ac9f073ccec69b608c009d31",#012 "sha256:084b2323a717fe711217b0ec21da61f4804f7a0d506adae935888421b80809cf"#012 ]#012 },#012 "Labels": {#012 "io.buildah.version": "1.41.3",#012 "maintainer": "OpenStack Kubernetes Operator team",#012 "org.label-schema.build-date": "20251125",#012 "org.label-schema.license": "GPLv2",#012 "org.label-schema.name": "CentOS Stream 9 Base Image",#012 "org.label-schema.schema-version": "1.0",#012 
"org.label-schema.vendor": "CentOS",#012 "tcib_build_tag": "1f5c0439f2433cb462b222a5bb23e629",#012 "tcib_managed": "true"#012 },#012 "Annotations": {},#012 "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012 "User": "nova",#012 "History": [#012 {#012 "created": "2025-11-25T04:02:36.223494528Z",#012 "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:36.223562059Z",#012 "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\" org.label-schema.name=\"CentOS Stream 9 Base Image\" org.label-schema.vendor=\"CentOS\" org.label-schema.license=\"GPLv2\" org.label-schema.build-date=\"20251125\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-25T04:02:39.054452717Z",#012 "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012 },#012 {#012 "created": "2025-11-26T06:10:57.55004106Z",#012 "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012 "comment": "FROM quay.io/centos/centos:stream9",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550061231Z",#012 "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550071761Z",#012 "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550082711Z",#012 "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550094371Z",#012 "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.550104472Z",#012 "created_by": "/bin/sh -c #(nop) USER root",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:10:57.937139683Z",#012 "created_by": "/bin/sh -c if [ -f 
\"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012 "empty_layer": true#012 },#012 {#012 "created": "2025-11-26T06:11:33.845342269Z",#012 "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012 "empty_layer": true#012 },#012 {#012 Nov 28 04:45:14 localhost python3.9[279243]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:15 localhost python3.9[279355]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:45:16 localhost python3.9[279464]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764323115.5039833-3717-65271323444404/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True 
remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:45:16 localhost python3.9[279519]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:45:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58798 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADDBCFB0000000001030307) Nov 28 04:45:17 localhost python3.9[279629]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:18 localhost python3.9[279737]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:19 localhost python3.9[279845]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1 Nov 28 04:45:20 localhost python3.9[279955]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None 
cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None 
shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 28 04:45:20 localhost systemd-journald[48427]: Field hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 102.7 (342 of 333 items), suggesting rotation. Nov 28 04:45:20 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:45:20 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:45:20 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:45:21 localhost python3.9[280089]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None Nov 28 04:45:21 localhost systemd[1]: Stopping nova_compute container... Nov 28 04:45:22 localhost nova_compute[228497]: 2025-11-28 09:45:22.573 228501 WARNING amqp [-] Received method (60, 30) during closing channel 1. 
This method will be ignored#033[00m Nov 28 04:45:22 localhost nova_compute[228497]: 2025-11-28 09:45:22.575 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 04:45:22 localhost nova_compute[228497]: 2025-11-28 09:45:22.576 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 04:45:22 localhost nova_compute[228497]: 2025-11-28 09:45:22.576 228501 DEBUG oslo_concurrency.lockutils [None req-8bf541a8-6dc3-40d0-8f78-14c26a1c42e5 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 04:45:22 localhost journal[227736]: End of file while reading data: Input/output error Nov 28 04:45:22 localhost systemd[1]: libpod-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e.scope: Deactivated successfully. Nov 28 04:45:22 localhost systemd[1]: libpod-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e.scope: Consumed 17.558s CPU time. 
Nov 28 04:45:22 localhost podman[280093]: 2025-11-28 09:45:22.948546556 +0000 UTC m=+1.165350710 container died 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125) Nov 28 04:45:22 localhost systemd[1]: tmp-crun.EXMo5n.mount: Deactivated successfully. Nov 28 04:45:22 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e-userdata-shm.mount: Deactivated successfully. 
Nov 28 04:45:23 localhost podman[280093]: 2025-11-28 09:45:23.08273871 +0000 UTC m=+1.299542784 container cleanup 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:45:23 localhost podman[280093]: nova_compute Nov 28 04:45:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:45:23 localhost podman[280122]: 2025-11-28 09:45:23.201836996 +0000 UTC m=+0.087478496 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red 
Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9-minimal) Nov 28 04:45:23 localhost podman[280122]: 2025-11-28 09:45:23.217396898 +0000 UTC m=+0.103038438 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 04:45:23 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:45:23 localhost podman[280152]: error opening file `/run/crun/1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e/status`: No such file or directory Nov 28 04:45:23 localhost podman[280123]: 2025-11-28 09:45:23.293047895 +0000 UTC m=+0.175540167 container cleanup 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 04:45:23 localhost podman[280123]: nova_compute Nov 28 04:45:23 localhost systemd[1]: edpm_nova_compute.service: Deactivated successfully. 
Nov 28 04:45:23 localhost systemd[1]: Stopped nova_compute container. Nov 28 04:45:23 localhost systemd[1]: Starting nova_compute container... Nov 28 04:45:23 localhost systemd[1]: Started libcrun container. Nov 28 04:45:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/nvme supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/etc/multipath supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d070e222432defa9c0fb260246ed4b88067e3e8c5320c077932e5b44f128942/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:23 localhost podman[280154]: 2025-11-28 09:45:23.404563726 +0000 UTC m=+0.086112173 container init 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}) Nov 28 04:45:23 localhost podman[280154]: 2025-11-28 09:45:23.413433721 +0000 UTC m=+0.094982168 container start 1d94fbbb119736b439df2d989b40e3ac469ad8b1f74689adb48cea226c0cf94e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 04:45:23 localhost podman[280154]: nova_compute Nov 28 04:45:23 localhost nova_compute[280168]: + sudo -E kolla_set_configs Nov 28 04:45:23 localhost systemd[1]: Started nova_compute container. Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Validating config file Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying service configuration files Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/nova/nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: 
INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /etc/ceph Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Creating directory /etc/ceph Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/ceph Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying 
/var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /var/lib/nova/.ssh/config Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Deleting /usr/sbin/iscsiadm Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Writing out command to execute Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ Nov 28 04:45:23 localhost nova_compute[280168]: 
INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:45:23 localhost nova_compute[280168]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config Nov 28 04:45:23 localhost nova_compute[280168]: ++ cat /run_command Nov 28 04:45:23 localhost nova_compute[280168]: + CMD=nova-compute Nov 28 04:45:23 localhost nova_compute[280168]: + ARGS= Nov 28 04:45:23 localhost nova_compute[280168]: + sudo kolla_copy_cacerts Nov 28 04:45:23 localhost nova_compute[280168]: + [[ ! -n '' ]] Nov 28 04:45:23 localhost nova_compute[280168]: + . kolla_extend_start Nov 28 04:45:23 localhost nova_compute[280168]: Running command: 'nova-compute' Nov 28 04:45:23 localhost nova_compute[280168]: + echo 'Running command: '\''nova-compute'\''' Nov 28 04:45:23 localhost nova_compute[280168]: + umask 0022 Nov 28 04:45:23 localhost nova_compute[280168]: + exec nova-compute Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.098 280172 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.098 280172 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.098 280172 DEBUG os_vif [-] Loaded VIF plugin class '' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.098 280172 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.209 280172 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.229 280172 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.230 280172 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.665 280172 INFO nova.virt.driver [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.778 280172 INFO nova.compute.provider_config [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.787 280172 DEBUG oslo_concurrency.lockutils [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.787 280172 DEBUG oslo_concurrency.lockutils [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_concurrency.lockutils [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.788 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] allow_resize_to_same_host = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] arq_binding_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] backdoor_port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] backdoor_socket = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] block_device_allocate_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.789 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cert = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute_driver = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute_monitors = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] config_dir = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] config_drive_format = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] config_file = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.790 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] config_source = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] console_host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] control_exchange = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cpu_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] debug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.791 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] default_availability_zone = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.792 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] default_ephemeral_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.792 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 
'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.793 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] default_schedule_zone = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.793 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] disk_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.793 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] enable_new_services = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.794 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] enabled_apis = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.794 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] enabled_ssl_apis = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.795 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] flat_injected = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.795 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] force_config_drive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.795 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] force_raw_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.796 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] graceful_shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.796 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.796 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] host = np0005538515.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.796 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] initial_cpu_allocation_ratio = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.796 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] initial_disk_allocation_ratio = 0.9 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] initial_ram_allocation_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] injected_network_template = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_build_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_delete_interval = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.797 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_name_template = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_usage_audit = False 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_usage_audit_period = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instance_uuid_format = [instance: %(uuid)s] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] instances_path = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.798 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] live_migration_retry_count = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_config_append = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_date_format = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_options = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_rotate_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.799 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_rotate_interval_type = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] log_rotation_type = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] long_rpc_timeout = 1800 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.800 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_concurrent_builds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_concurrent_snapshots = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_local_block_devices = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_logfile_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] max_logfile_size_mb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 
04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.801 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metadata_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metadata_listen_port = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metadata_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] migrate_max_retries = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] mkisofs_cmd = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] my_block_storage_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] my_ip = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.802 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] network_allocate_retries = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] osapi_compute_listen = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] osapi_compute_listen_port = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] osapi_compute_unique_server_name_scope = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] osapi_compute_workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] password_length = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] periodic_enable = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.803 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] periodic_fuzzy_delay = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] pointer_model = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] preallocate_images = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] publish_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] pybasedir = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ram_allocation_ratio = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rate_limit_burst = 0 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.804 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rate_limit_except_level = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rate_limit_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reboot_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reclaim_instance_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] record = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reimage_timeout_per_gb = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] report_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rescue_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.805 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reserved_host_cpus = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reserved_host_disk_mb = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reserved_host_memory_mb = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] reserved_huge_pages = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] resize_confirm_window = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] resize_fs_using_block_device = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.806 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rootwrap_config = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rpc_response_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] run_external_periodic_tasks = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.807 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_down_time = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] servicegroup_driver = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] shelved_offload_time = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] shelved_poll_interval = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] shutdown_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] source_is_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ssl_only = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] state_path = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.808 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] sync_power_state_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] sync_power_state_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] syslog_log_facility = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] tempdir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] timeout_nbd = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] update_resources_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.809 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_cow_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_eventlog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_journal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_json = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_rootwrap_daemon = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_stderr = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] use_syslog = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vcpu_pin_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.810 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plugging_is_fatal = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.811 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plugging_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.811 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] virt_mkfs = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.811 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] volume_usage_poll_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.811 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] watch_log_file = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.811 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] web = /usr/share/spice-html5 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_concurrency.lock_path = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_metrics.metrics_process_name = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.812 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.auth_strategy = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.compute_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.dhcp_domain = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.enable_instance_password = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.glance_link_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.813 
280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.local_metadata_per_cell = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.max_limit = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.metadata_cache_expiration = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 
09:45:25.814 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.neutron_default_tenant_id = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.use_forwarded_for = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.use_neutron_default_nets = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_dynamic_ssl_certfile = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 
09:45:25.815 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_jsonfile_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api.vendordata_providers = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.backend = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.backend_argument = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.config_prefix = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.dead_timeout = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.816 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.debug_cache_backend = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.enable_retry_client = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.enable_socket_keepalive = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.expiration_time = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.hashclient_retry_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.817 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_dead_retry = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_password = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_pool_maxsize = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_sasl_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.818 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_servers = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_socket_timeout = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.memcache_username = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.proxies = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.retry_delay = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.socket_keepalive_count = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.819 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
cache.socket_keepalive_idle = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.tls_allowed_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.tls_cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.tls_certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.tls_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cache.tls_keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.820 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.auth_section = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.catalog_info = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.cross_az_attach = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.821 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.endpoint_template = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.os_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cinder.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.822 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.cpu_dedicated_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.cpu_shared_set = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.823 280172 
DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] compute.vmdk_allowed_types = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] conductor.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] console.allowed_origins = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 
localhost nova_compute[280168]: 2025-11-28 09:45:25.824 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] console.ssl_ciphers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] console.ssl_minimum_version = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] consoleauth.token_ttl = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.825 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
cyborg.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.826 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.service_type = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] cyborg.version = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.827 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.db_inc_retry_interval = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.828 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.829 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.backend = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.830 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.connection_debug = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.connection_parameters = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.connection_trace = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.db_max_retries = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.831 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.max_overflow = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.max_pool_size = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.max_retries = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.mysql_enable_ndb = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.mysql_sql_mode = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.832 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.pool_timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.slave_connection = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] devices.enabled_mdev_types = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.833 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.api_servers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service 
[None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.834 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.enable_rbd_download = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.835 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.num_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.rbd_ceph_conf = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.rbd_pool = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.rbd_user = log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.836 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.service_type = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 
28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.837 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] glance.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] guestfs.debug = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.config_drive_cdrom = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.dynamic_memory_ratio = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.838 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.enable_remotefx = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.instances_path_share = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.iscsi_initiator_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.limit_cpu_features = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.839 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.qemu_img_cmd = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.use_multipath_io = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.vswitch_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.840 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] mks.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] mks.mksproxy_base_url = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.manager_interval = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 
localhost nova_compute[280168]: 2025-11-28 09:45:25.841 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] image_cache.subdirectory_name = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.api_max_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.api_retry_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 
09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.842 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.843 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.843 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.843 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.843 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.843 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.partition_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.peer_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.844 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.service_type = 
baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ironic.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.845 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] key_manager.backend = barbican log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] key_manager.fixed_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.auth_endpoint = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.barbican_api_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.barbican_endpoint = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.barbican_region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.846 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.cafile = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.number_of_retries = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.retry_delay = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.847 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.verify_ssl = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican.verify_ssl_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.848 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] barbican_service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.approle_role_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.849 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.approle_secret_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.kv_mountpoint = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.kv_version = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.850 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.root_token_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.use_ssl = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vault.vault_url = http://127.0.0.1:8200 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.851 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.852 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.service_type = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.853 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.valid_interfaces = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] keystone.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.connection_uri = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_mode = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_model_extra_flags = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.854 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_models = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_power_management = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.device_detach_timeout = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.855 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.disk_cachemodes = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.disk_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.enabled_perf_events = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.file_backed_memory = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.gid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.hw_disk_discard = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.hw_machine_type = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.856 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_rbd_ceph_conf = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_rbd_pool = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_type = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.images_volume_group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.inject_key = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.857 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.inject_partition = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.iscsi_iface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.iser_use_multipath = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.858 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_scheme = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.859 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 WARNING oslo_config.cfg [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 28 04:45:25 localhost nova_compute[280168]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 28 04:45:25 localhost nova_compute[280168]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 28 04:45:25 localhost nova_compute[280168]: and ``live_migration_inbound_addr`` respectively.
Nov 28 04:45:25 localhost nova_compute[280168]: ). Its value may be silently ignored in the future.
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_uri = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.max_queues = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.860 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.nfs_mount_options = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.nfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_iser_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.861 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_pcie_ports = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.num_volume_scan_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.pmem_namespaces = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.quobyte_client_cfg = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rbd_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.862 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rbd_secret_uuid = 2c5417c9-00eb-57d5-a565-ddecbc7995c1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rbd_user = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rescue_image_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.863 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rescue_kernel_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rescue_ramdisk_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rng_dev_path = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.rx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.smbfs_mount_options = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.snapshot_compression = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.864 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.snapshot_image_format = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.snapshots_directory = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.swtpm_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.swtpm_group = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.swtpm_user = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.865 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.sysinfo_serial = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.tx_queue_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.uid_maps = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.virt_type = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.volume_clear = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.volume_clear_size = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.volume_use_multipath = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.866 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_cache_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_mount_group = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_mount_opts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_mount_perms = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.867 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.vzstorage_mount_user = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.868 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.default_floating_pool = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.http_retries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.869 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.ovs_bridge = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.physnets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.870 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.service_type = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] neutron.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.871 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] notifications.default_level = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172
DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] pci.alias = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] pci.device_spec = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.872 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] pci.report_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.873 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.874 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.password = 
**** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.875 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.region_name = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.service_type = placement log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.876 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.valid_interfaces = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.877 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] placement.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.cores = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m 
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.driver = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.injected_files = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.878 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.instances = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.key_pairs = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.metadata_items = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 
09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.ram = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.recheck_quota = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.server_group_members = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] quota.server_groups = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.879 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rdp.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] rdp.html5_proxy_base_url = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.880 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.max_attempts = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] scheduler.workers = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.881 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.882 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.883 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.884 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metrics.required = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metrics.weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metrics.weight_of_unavailable = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] metrics.weight_setting = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.885 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.base_url = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.port_range = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.886 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.887 
280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] service_user.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.agent_enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.html5proxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.html5proxy_port = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.888 280172 DEBUG oslo_service.service 
[None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.image_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.jpeg_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.playback_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.server_listen = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.streaming_mode = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] spice.zlib_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.889 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 
- - - - - -] upgrade_levels.baseapi = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] upgrade_levels.cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] upgrade_levels.compute = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] upgrade_levels.conductor = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] upgrade_levels.scheduler = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.890 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 
- - - - - -] vmware.api_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.891 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.cache_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.cluster_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.connection_pool_size = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.console_delay_seconds = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.datastore_regex = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.host_ip = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.892 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.host_password = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.host_port = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.host_username = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.integration_bridge = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.maximum_objects = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.pbm_default_policy = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 
04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.pbm_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.893 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.pbm_wsdl_location = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.serial_log_dir = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.serial_port_proxy_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.task_poll_interval = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.use_linked_clone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 
2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.vnc_keymap = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.894 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.vnc_port = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vmware.vnc_port_total = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.auth_schemes = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.enabled = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.novncproxy_base_url = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.novncproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 
09:45:25.895 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.novncproxy_port = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.server_listen = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.vencrypt_ca_certs = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.vencrypt_client_cert = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vnc.vencrypt_client_key = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.896 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.disable_rootwrap = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 
localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.897 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.898 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.api_paste_config = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.client_socket_timeout = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.899 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.default_pool_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.keep_alive = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.max_header_line = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.secure_proxy_ssl_header = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.ssl_ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
wsgi.ssl_cert_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.ssl_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.900 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.tcp_keepidle = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] wsgi.wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] zvm.ca_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] zvm.cloud_connector_url = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] zvm.image_tmp_path = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] zvm.reachable_timeout = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.901 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.enforce_scope = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.policy_dirs = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.policy_file = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.902 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] remote_debug.host = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] remote_debug.port = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.903 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.904 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.905 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost 
nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.906 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl = False log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl_ca_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl_cert_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl_key_file = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.907 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_rabbit.ssl_version = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_notifications.retry = -1 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.auth_type = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.908 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.auth_url = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] 
oslo_limit.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.connect_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.909 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.domain_name = None 
log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.endpoint_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.endpoint_override = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.max_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.910 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.min_version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.password = **** log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.project_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.service_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.911 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.service_type = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.system_scope = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.trust_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.912 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.user_domain_id = None log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.user_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.username = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.valid_interfaces = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_limit.version = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.913 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values 
/usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] oslo_reports.log_dir = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.914 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - 
- - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] vif_plug_ovs_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None 
req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.915 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.iptables_top_regex = log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.use_ipv6 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 
280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.916 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.isolate_vif = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.ovs_vsctl_timeout = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.ovsdb_connection = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.ovsdb_interface = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_vif_ovs.per_port_bridge = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG 
oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_brick.lock_path = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.917 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.logger_name = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] privsep_osbrick.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.918 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.capabilities = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] nova_sys_admin.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.919 280172 DEBUG oslo_service.service [None req-88da137d-e844-41ab-ba2e-65531aa98673 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.920 280172 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.941 280172 INFO nova.virt.node [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from /var/lib/nova/compute_id
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.941 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.942 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.942 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.942 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.951 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Registering for lifecycle events _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.954 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Registering for connection events: _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.954 280172 INFO nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Connection event '1' reason 'None'
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.961 280172 INFO nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Libvirt host capabilities
Nov 28 04:45:25 localhost nova_compute[280168]: [multi-line capabilities XML; markup lost in capture, only text nodes survive: host UUID 4c358f0e-7e15-44e5-bde2-714780d05a92; arch x86_64; CPU model EPYC-Rome-v4, vendor AMD; migration transports tcp and rdma; memory/page values 16116612, 4029153, 0, 0; security models selinux (0, system_u:system_r:svirt_t:s0, system_u:system_r:svirt_tcg_t:s0) and dac (0, +107:+107, +107:+107); hvm guest archs at wordsize 32 and 64 via emulator /usr/libexec/qemu-kvm with machine types pc-i440fx-rhel7.6.0 (pc), pc-q35-rhel9.8.0 (q35), pc-q35-rhel9.6.0, pc-q35-rhel8.6.0, pc-q35-rhel9.4.0, pc-q35-rhel8.5.0, pc-q35-rhel8.3.0, pc-q35-rhel7.6.0, pc-q35-rhel8.4.0, pc-q35-rhel9.2.0, pc-q35-rhel8.2.0, pc-q35-rhel9.0.0, pc-q35-rhel8.0.0, pc-q35-rhel8.1.0]
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.971 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.973 280172 DEBUG nova.virt.libvirt.volume.mount [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 28 04:45:25 localhost nova_compute[280168]: 2025-11-28 09:45:25.974 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 28 04:45:25 localhost nova_compute[280168]: [multi-line domain capabilities XML; markup lost in capture, only text nodes survive: emulator /usr/libexec/qemu-kvm, domain kvm, machine pc-q35-rhel9.8.0, arch i686; firmware loader /usr/share/OVMF/OVMF_CODE.secboot.fd with types rom and pflash and yes/no/on/off enum values; host-model CPU EPYC-Rome, vendor AMD; CPU model list beginning 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1 through Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1 through Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake, Cooperlake-v1, Cooperlake-v2, Denverton, Denverton-v1 through Denverton-v3, Dhyana, Dhyana-v1, Dhyana-v2, EPYC, EPYC-Genoa, EPYC-Genoa-v1, EPYC-IBPB, EPYC-Milan, EPYC-Milan-v1, EPYC-Milan-v2, EPYC-Rome, EPYC-Rome-v1 through EPYC-Rome-v4, EPYC-v1 through EPYC-v4, GraniteRapids, GraniteRapids-v1, list continues] Nov 28 04:45:25 localhost
nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: GraniteRapids-v2 Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost 
nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 
04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Haswell Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:25 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 
Icelake-Server-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v5 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v6 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v7 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: KnightsMill Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: KnightsMill-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G1 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G1-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G2 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G2-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G3 Nov 28 04:45:26 localhost nova_compute[280168]: 
Opteron_G3-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G4-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G5 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G5-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Penryn Nov 28 04:45:26 localhost nova_compute[280168]: Penryn-v1 Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-v1 Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-v2 Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
Nov 28 04:45:26 localhost nova_compute[280168]: [libvirt domainCapabilities XML dump; markup was stripped during log capture, leaving only element values. Recoverable values, in order:]
Nov 28 04:45:26 localhost nova_compute[280168]: CPU models (continued): SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 28 04:45:26 localhost nova_compute[280168]: memory backing source types: file, anonymous, memfd
Nov 28 04:45:26 localhost nova_compute[280168]: disk devices: disk, cdrom, floppy, lun; disk buses: fdc, scsi, virtio, usb, sata; disk models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:45:26 localhost nova_compute[280168]: graphics types: vnc, egl-headless, dbus
Nov 28 04:45:26 localhost nova_compute[280168]: hostdev mode: subsystem; startup policy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; hostdev models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:45:26 localhost nova_compute[280168]: rng backends: random, egd, builtin
Nov 28 04:45:26 localhost nova_compute[280168]: filesystem driver types: path, handle, virtiofs
Nov 28 04:45:26 localhost nova_compute[280168]: tpm models: tpm-tis, tpm-crb; tpm backends: emulator, external; tpm version: 2.0
Nov 28 04:45:26 localhost nova_compute[280168]: redirdev bus: usb; channel types: pty, unix; crypto backends: qemu, builtin; interface backends: default, passt
Nov 28 04:45:26 localhost nova_compute[280168]: panic models: isa, hyperv
Nov 28 04:45:26 localhost nova_compute[280168]: console/serial types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 28 04:45:26 localhost nova_compute[280168]: hyperv features: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Nov 28 04:45:26 localhost nova_compute[280168]: other values: 4095, on, off, off, Linux KVM Hv, tdx
Nov 28 04:45:26 localhost nova_compute[280168]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:25.979 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 28 04:45:26 localhost nova_compute[280168]: [second domainCapabilities dump, markup likewise stripped; recoverable values:]
Nov 28 04:45:26 localhost nova_compute[280168]: emulator: /usr/libexec/qemu-kvm; domain type: kvm; machine: pc-i440fx-rhel7.6.0; arch: i686
Nov 28 04:45:26 localhost nova_compute[280168]: firmware loader: /usr/share/OVMF/OVMF_CODE.secboot.fd; loader types: rom, pflash; readonly: yes, no; secure: no; feature toggles: on, off, on, off; host CPU model: EPYC-Rome; vendor: AMD
Nov 28 04:45:26 localhost nova_compute[280168]: CPU models: 486, 486-v1, Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS, Broadwell-v1, Broadwell-v2, Broadwell-v3, Broadwell-v4, Cascadelake-Server, Cascadelake-Server-noTSX, Cascadelake-Server-v1, Cascadelake-Server-v2, Cascadelake-Server-v3, Cascadelake-Server-v4, Cascadelake-Server-v5, Conroe, Conroe-v1, Cooperlake [list continues]
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-IBPB Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v4 Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v1 Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v2 Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Haswell-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
Nov 28 04:45:26 localhost nova_compute[280168]: [libvirt domainCapabilities dump follows; the XML markup was lost in log extraction and the repeated per-line syslog prefixes have been collapsed. Section labels below are inferred from the libvirt domainCapabilities schema; values whose enclosing element could not be recovered are listed as "unlabeled".]
Nov 28 04:45:26 localhost nova_compute[280168]: CPU models: Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5, Snowridge, Snowridge-v1, Snowridge-v2, Snowridge-v3, Snowridge-v4, Westmere, Westmere-IBRS, Westmere-v1, Westmere-v2, athlon, athlon-v1, core2duo, core2duo-v1, coreduo, coreduo-v1, kvm32, kvm32-v1, kvm64, kvm64-v1, n270, n270-v1, pentium, pentium-v1, pentium2, pentium2-v1, pentium3, pentium3-v1, phenom, phenom-v1, qemu32, qemu32-v1, qemu64, qemu64-v1
Nov 28 04:45:26 localhost nova_compute[280168]: memory backing source types: file, anonymous, memfd
Nov 28 04:45:26 localhost nova_compute[280168]: disk device types: disk, cdrom, floppy, lun; buses: ide, fdc, scsi, virtio, usb, sata; models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:45:26 localhost nova_compute[280168]: graphics types: vnc, egl-headless, dbus
Nov 28 04:45:26 localhost nova_compute[280168]: hostdev mode: subsystem; startupPolicy: default, mandatory, requisite, optional; subsystem types: usb, pci, scsi; models: virtio, virtio-transitional, virtio-non-transitional
Nov 28 04:45:26 localhost nova_compute[280168]: rng backend models: random, egd, builtin
Nov 28 04:45:26 localhost nova_compute[280168]: filesystem driver types: path, handle, virtiofs
Nov 28 04:45:26 localhost nova_compute[280168]: tpm models: tpm-tis, tpm-crb; backends: emulator, external; version: 2.0
Nov 28 04:45:26 localhost nova_compute[280168]: redirdev bus: usb; chardev types: pty, unix
Nov 28 04:45:26 localhost nova_compute[280168]: unlabeled values: qemu, builtin
Nov 28 04:45:26 localhost nova_compute[280168]: interface backends: default, passt
Nov 28 04:45:26 localhost nova_compute[280168]: panic models: isa, hyperv
Nov 28 04:45:26 localhost nova_compute[280168]: console/channel types: null, vc, pty, dev, file, pipe, stdio, udp, tcp, unix, qemu-vdagent, dbus
Nov 28 04:45:26 localhost nova_compute[280168]: hyperv enlightenments: relaxed, vapic, spinlocks, vpindex, runtime, synic, stimer, reset, vendor_id, frequencies, reenlightenment, tlbflush, ipi, avic, emsr_bitmap, xmm_input
Nov 28 04:45:26 localhost nova_compute[280168]: unlabeled values: 4095, on, off, off, Linux KVM Hv
Nov 28 04:45:26 localhost nova_compute[280168]: launch security type: tdx
nova_compute[280168]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.011 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.016 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: /usr/libexec/qemu-kvm Nov 28 04:45:26 localhost nova_compute[280168]: kvm Nov 28 04:45:26 localhost nova_compute[280168]: pc-q35-rhel9.8.0 Nov 28 04:45:26 localhost nova_compute[280168]: x86_64 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: efi Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: /usr/share/edk2/ovmf/OVMF_CODE.secboot.fd Nov 28 04:45:26 localhost nova_compute[280168]: /usr/share/edk2/ovmf/OVMF_CODE.fd Nov 28 04:45:26 localhost nova_compute[280168]: /usr/share/edk2/ovmf/OVMF.amdsev.fd Nov 28 04:45:26 localhost nova_compute[280168]: /usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: rom Nov 28 04:45:26 localhost nova_compute[280168]: pflash Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: yes Nov 28 04:45:26 
localhost nova_compute[280168]: no Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: yes Nov 28 04:45:26 localhost nova_compute[280168]: no Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome Nov 28 04:45:26 localhost nova_compute[280168]: AMD Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 486 Nov 28 04:45:26 localhost nova_compute[280168]: 486-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 
Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Cascadelake-Server-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v5 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Conroe Nov 28 04:45:26 localhost nova_compute[280168]: Conroe-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa-v1 
Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-IBPB Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v1 Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v4 Nov 28 04:45:26 localhost 
nova_compute[280168]: EPYC-v1 Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v2 Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
Nov 28 04:45:26 localhost nova_compute[280168]: [libvirt domain-capabilities CPU model list; the surrounding XML markup was lost in this log capture, leaving only repeated syslog prefixes. Recoverable model names, in order:] GraniteRapids-v2, Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS, Haswell-v1, Haswell-v2, Haswell-v3, Haswell-v4, Icelake-Server, Icelake-Server-noTSX, Icelake-Server-v1, Icelake-Server-v2, Icelake-Server-v3, Icelake-Server-v4, Icelake-Server-v5, Icelake-Server-v6, Icelake-Server-v7, IvyBridge, IvyBridge-IBRS, IvyBridge-v1, IvyBridge-v2, KnightsMill, KnightsMill-v1, Nehalem, Nehalem-IBRS, Nehalem-v1, Nehalem-v2, Opteron_G1, Opteron_G1-v1, Opteron_G2, Opteron_G2-v1, Opteron_G3, Opteron_G3-v1, Opteron_G4, Opteron_G4-v1, Opteron_G5, Opteron_G5-v1, Penryn, Penryn-v1, SandyBridge, SandyBridge-IBRS, SandyBridge-v1, SandyBridge-v2, SapphireRapids, SapphireRapids-v1, SapphireRapids-v2, SapphireRapids-v3, SierraForest, SierraForest-v1, Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS, Skylake-Client-v1, Skylake-Client-v2, Skylake-Client-v3, Skylake-Client-v4, Skylake-Server, Skylake-Server-IBRS, Skylake-Server-noTSX-IBRS, Skylake-Server-v1, Skylake-Server-v2, Skylake-Server-v3, Skylake-Server-v4, Skylake-Server-v5 (list truncated in this capture)
28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Westmere Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-v2 Nov 28 04:45:26 localhost nova_compute[280168]: athlon Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: athlon-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: core2duo Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: core2duo-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: coreduo Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 
Nov 28 04:45:26 localhost nova_compute[280168]: coreduo-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: kvm32 Nov 28 04:45:26 localhost nova_compute[280168]: kvm32-v1 Nov 28 04:45:26 localhost nova_compute[280168]: kvm64 Nov 28 04:45:26 localhost nova_compute[280168]: kvm64-v1 Nov 28 04:45:26 localhost nova_compute[280168]: n270 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: n270-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: pentium Nov 28 04:45:26 localhost nova_compute[280168]: pentium-v1 Nov 28 04:45:26 localhost nova_compute[280168]: pentium2 Nov 28 04:45:26 localhost nova_compute[280168]: pentium2-v1 Nov 28 04:45:26 localhost nova_compute[280168]: pentium3 Nov 28 04:45:26 localhost nova_compute[280168]: pentium3-v1 Nov 28 04:45:26 localhost nova_compute[280168]: phenom Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: phenom-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: qemu32 Nov 28 04:45:26 localhost nova_compute[280168]: qemu32-v1 Nov 28 04:45:26 localhost nova_compute[280168]: qemu64 Nov 28 04:45:26 localhost nova_compute[280168]: qemu64-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: file Nov 28 04:45:26 localhost nova_compute[280168]: anonymous Nov 28 04:45:26 localhost nova_compute[280168]: memfd Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: disk Nov 28 04:45:26 localhost nova_compute[280168]: cdrom Nov 28 04:45:26 localhost nova_compute[280168]: floppy Nov 28 04:45:26 localhost nova_compute[280168]: lun Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: fdc Nov 28 04:45:26 localhost nova_compute[280168]: scsi Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: sata Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: virtio-transitional Nov 28 04:45:26 localhost nova_compute[280168]: virtio-non-transitional Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: vnc Nov 28 04:45:26 localhost nova_compute[280168]: egl-headless Nov 28 04:45:26 localhost nova_compute[280168]: dbus Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: subsystem Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: default Nov 28 04:45:26 localhost nova_compute[280168]: mandatory Nov 28 04:45:26 localhost nova_compute[280168]: requisite Nov 28 04:45:26 localhost nova_compute[280168]: optional Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: pci Nov 28 04:45:26 localhost nova_compute[280168]: scsi Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: virtio-transitional Nov 28 04:45:26 localhost nova_compute[280168]: virtio-non-transitional Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: random Nov 28 04:45:26 localhost nova_compute[280168]: egd Nov 28 04:45:26 localhost nova_compute[280168]: builtin Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: path Nov 28 04:45:26 localhost nova_compute[280168]: handle Nov 28 04:45:26 localhost nova_compute[280168]: virtiofs Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: tpm-tis Nov 28 04:45:26 localhost 
nova_compute[280168]: tpm-crb Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: emulator Nov 28 04:45:26 localhost nova_compute[280168]: external Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 2.0 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: pty Nov 28 04:45:26 localhost nova_compute[280168]: unix Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: qemu Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: builtin Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: default Nov 28 04:45:26 localhost nova_compute[280168]: passt Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: isa Nov 28 04:45:26 localhost nova_compute[280168]: hyperv Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: null Nov 28 04:45:26 localhost nova_compute[280168]: vc Nov 28 04:45:26 localhost nova_compute[280168]: pty Nov 28 04:45:26 localhost nova_compute[280168]: dev Nov 28 04:45:26 localhost nova_compute[280168]: file Nov 28 04:45:26 localhost nova_compute[280168]: pipe Nov 28 04:45:26 localhost nova_compute[280168]: stdio Nov 28 04:45:26 localhost nova_compute[280168]: udp Nov 28 04:45:26 localhost nova_compute[280168]: tcp Nov 28 04:45:26 localhost nova_compute[280168]: unix Nov 28 04:45:26 localhost nova_compute[280168]: qemu-vdagent Nov 28 04:45:26 localhost nova_compute[280168]: dbus Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: relaxed Nov 28 04:45:26 localhost nova_compute[280168]: vapic Nov 28 04:45:26 localhost nova_compute[280168]: spinlocks Nov 28 04:45:26 localhost nova_compute[280168]: vpindex Nov 28 04:45:26 localhost nova_compute[280168]: runtime Nov 28 04:45:26 localhost nova_compute[280168]: synic Nov 28 04:45:26 localhost nova_compute[280168]: stimer Nov 28 04:45:26 localhost nova_compute[280168]: reset Nov 28 04:45:26 
localhost nova_compute[280168]: vendor_id Nov 28 04:45:26 localhost nova_compute[280168]: frequencies Nov 28 04:45:26 localhost nova_compute[280168]: reenlightenment Nov 28 04:45:26 localhost nova_compute[280168]: tlbflush Nov 28 04:45:26 localhost nova_compute[280168]: ipi Nov 28 04:45:26 localhost nova_compute[280168]: avic Nov 28 04:45:26 localhost nova_compute[280168]: emsr_bitmap Nov 28 04:45:26 localhost nova_compute[280168]: xmm_input Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 4095 Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Linux KVM Hv Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: tdx Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.068 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: /usr/libexec/qemu-kvm Nov 28 04:45:26 localhost nova_compute[280168]: kvm Nov 28 04:45:26 localhost nova_compute[280168]: pc-i440fx-rhel7.6.0 Nov 28 04:45:26 localhost nova_compute[280168]: x86_64 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: /usr/share/OVMF/OVMF_CODE.secboot.fd Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: rom Nov 28 04:45:26 localhost nova_compute[280168]: pflash Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: yes Nov 28 04:45:26 localhost nova_compute[280168]: no Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: no Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome Nov 28 04:45:26 localhost nova_compute[280168]: AMD Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 486 Nov 28 04:45:26 localhost nova_compute[280168]: 486-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Broadwell-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-noTSX Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Cascadelake-Server-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cascadelake-Server-v5 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Conroe Nov 28 04:45:26 localhost nova_compute[280168]: Conroe-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v1 Nov 28 04:45:26 localhost 
Nov 28 04:45:26 localhost nova_compute[280168]: Cooperlake-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Denverton
Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Denverton-v3
Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana
Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Dhyana-v2
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Genoa-v1
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-IBPB
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v1
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Milan-v2
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v1
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v2
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v3
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-Rome-v4
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v1
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v2
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v3
Nov 28 04:45:26 localhost nova_compute[280168]: EPYC-v4
Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids
Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids-v1
Nov 28 04:45:26 localhost nova_compute[280168]: GraniteRapids-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-IBRS
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-noTSX-IBRS
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v3
Nov 28 04:45:26 localhost nova_compute[280168]: Haswell-v4
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-noTSX
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v3
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v4
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v5
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v6
Nov 28 04:45:26 localhost nova_compute[280168]: Icelake-Server-v7
Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge
Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-IBRS
Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-v1
Nov 28 04:45:26 localhost nova_compute[280168]: IvyBridge-v2
Nov 28 04:45:26 localhost nova_compute[280168]: KnightsMill
Nov 28 04:45:26 localhost nova_compute[280168]: KnightsMill-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem
Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-IBRS
Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Nehalem-v2
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G1
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G1-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G2
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G2-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G3
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G3-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G4
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G4-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G5
Nov 28 04:45:26 localhost nova_compute[280168]: Opteron_G5-v1
Nov 28 04:45:26 localhost nova_compute[280168]: Penryn
Nov 28 04:45:26 localhost nova_compute[280168]: Penryn-v1
Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge
Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-IBRS
Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-v1
Nov 28 04:45:26 localhost nova_compute[280168]: SandyBridge-v2
Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids
Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids-v1
Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids-v2
Nov 28 04:45:26 localhost nova_compute[280168]: SapphireRapids-v3
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: SierraForest Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: SierraForest-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Skylake-Client-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Client-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-noTSX-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Skylake-Server-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 
28 04:45:26 localhost nova_compute[280168]: Skylake-Server-v5 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v2 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v3 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Snowridge-v4 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Westmere Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-IBRS Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Westmere-v2 Nov 28 04:45:26 localhost nova_compute[280168]: athlon Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: athlon-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: core2duo Nov 28 
04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: core2duo-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: coreduo Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: coreduo-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: kvm32 Nov 28 04:45:26 localhost nova_compute[280168]: kvm32-v1 Nov 28 04:45:26 localhost nova_compute[280168]: kvm64 Nov 28 04:45:26 localhost nova_compute[280168]: kvm64-v1 Nov 28 04:45:26 localhost nova_compute[280168]: n270 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: n270-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: pentium Nov 28 04:45:26 localhost nova_compute[280168]: pentium-v1 Nov 28 04:45:26 localhost nova_compute[280168]: pentium2 Nov 28 04:45:26 localhost nova_compute[280168]: pentium2-v1 Nov 28 04:45:26 localhost nova_compute[280168]: pentium3 Nov 28 04:45:26 localhost nova_compute[280168]: pentium3-v1 Nov 28 04:45:26 localhost nova_compute[280168]: phenom Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost 
nova_compute[280168]: phenom-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: qemu32 Nov 28 04:45:26 localhost nova_compute[280168]: qemu32-v1 Nov 28 04:45:26 localhost nova_compute[280168]: qemu64 Nov 28 04:45:26 localhost nova_compute[280168]: qemu64-v1 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: file Nov 28 04:45:26 localhost nova_compute[280168]: anonymous Nov 28 04:45:26 localhost nova_compute[280168]: memfd Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: disk Nov 28 04:45:26 localhost nova_compute[280168]: cdrom Nov 28 04:45:26 localhost nova_compute[280168]: floppy Nov 28 04:45:26 localhost nova_compute[280168]: lun Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: ide Nov 28 04:45:26 localhost nova_compute[280168]: fdc Nov 28 04:45:26 localhost nova_compute[280168]: scsi Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: sata Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: virtio-transitional Nov 28 04:45:26 localhost nova_compute[280168]: virtio-non-transitional Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: vnc Nov 28 04:45:26 localhost nova_compute[280168]: egl-headless Nov 28 04:45:26 localhost nova_compute[280168]: dbus Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: subsystem Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: default Nov 28 04:45:26 localhost nova_compute[280168]: mandatory Nov 28 04:45:26 localhost nova_compute[280168]: requisite Nov 28 04:45:26 localhost nova_compute[280168]: optional Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: pci Nov 28 04:45:26 localhost nova_compute[280168]: scsi Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: virtio Nov 28 04:45:26 localhost nova_compute[280168]: virtio-transitional Nov 28 04:45:26 localhost nova_compute[280168]: virtio-non-transitional Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: random Nov 28 04:45:26 localhost nova_compute[280168]: egd Nov 28 04:45:26 localhost nova_compute[280168]: builtin Nov 28 04:45:26 localhost nova_compute[280168]: 
Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: path Nov 28 04:45:26 localhost nova_compute[280168]: handle Nov 28 04:45:26 localhost nova_compute[280168]: virtiofs Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: tpm-tis Nov 28 04:45:26 localhost nova_compute[280168]: tpm-crb Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: emulator Nov 28 04:45:26 localhost nova_compute[280168]: external Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 2.0 Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: usb Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: pty Nov 28 04:45:26 localhost nova_compute[280168]: unix Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: qemu Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: builtin Nov 28 04:45:26 localhost 
nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: default Nov 28 04:45:26 localhost nova_compute[280168]: passt Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: isa Nov 28 04:45:26 localhost nova_compute[280168]: hyperv Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: null Nov 28 04:45:26 localhost nova_compute[280168]: vc Nov 28 04:45:26 localhost nova_compute[280168]: pty Nov 28 04:45:26 localhost nova_compute[280168]: dev Nov 28 04:45:26 localhost nova_compute[280168]: file Nov 28 04:45:26 localhost nova_compute[280168]: pipe Nov 28 04:45:26 localhost nova_compute[280168]: stdio Nov 28 04:45:26 localhost nova_compute[280168]: udp Nov 28 04:45:26 localhost nova_compute[280168]: tcp Nov 28 04:45:26 localhost nova_compute[280168]: unix Nov 28 04:45:26 localhost nova_compute[280168]: qemu-vdagent Nov 28 04:45:26 localhost nova_compute[280168]: dbus Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 
localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: relaxed Nov 28 04:45:26 localhost nova_compute[280168]: vapic Nov 28 04:45:26 localhost nova_compute[280168]: spinlocks Nov 28 04:45:26 localhost nova_compute[280168]: vpindex Nov 28 04:45:26 localhost nova_compute[280168]: runtime Nov 28 04:45:26 localhost nova_compute[280168]: synic Nov 28 04:45:26 localhost nova_compute[280168]: stimer Nov 28 04:45:26 localhost nova_compute[280168]: reset Nov 28 04:45:26 localhost nova_compute[280168]: vendor_id Nov 28 04:45:26 localhost nova_compute[280168]: frequencies Nov 28 04:45:26 localhost nova_compute[280168]: reenlightenment Nov 28 04:45:26 localhost nova_compute[280168]: tlbflush Nov 28 04:45:26 localhost nova_compute[280168]: ipi Nov 28 04:45:26 localhost nova_compute[280168]: avic Nov 28 04:45:26 localhost nova_compute[280168]: emsr_bitmap Nov 28 04:45:26 localhost nova_compute[280168]: xmm_input Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: 4095 Nov 28 04:45:26 localhost nova_compute[280168]: on Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: off Nov 28 04:45:26 localhost nova_compute[280168]: Linux KVM Hv Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: tdx Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: Nov 28 04:45:26 localhost nova_compute[280168]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m Nov 28 04:45:26 
localhost nova_compute[280168]: 2025-11-28 09:45:26.119 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.120 280172 INFO nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Secure Boot support detected#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.122 280172 INFO nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.123 280172 INFO nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.133 280172 DEBUG nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.155 280172 INFO nova.virt.node [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Determined node identity 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from /var/lib/nova/compute_id#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.175 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Verified node 72fba1ca-0d86-48af-8a3d-510284dfd0e0 matches my host np0005538515.localdomain _check_for_host_rename 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.217 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.338 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.339 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.339 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.339 280172 DEBUG nova.compute.resource_tracker [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.339 280172 DEBUG oslo_concurrency.processutils [None 
req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.777 280172 DEBUG oslo_concurrency.processutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.906 280172 WARNING nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.907 280172 DEBUG nova.compute.resource_tracker [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12519MB free_disk=41.837093353271484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.908 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:45:26 localhost nova_compute[280168]: 2025-11-28 09:45:26.908 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 
2025-11-28 09:45:27.055 280172 DEBUG nova.compute.resource_tracker [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.055 280172 DEBUG nova.compute.resource_tracker [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.120 280172 DEBUG nova.scheduler.client.report [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.142 280172 DEBUG nova.scheduler.client.report [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.142 280172 DEBUG nova.compute.provider_tree [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] 
Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.156 280172 DEBUG nova.scheduler.client.report [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.179 280172 DEBUG nova.scheduler.client.report [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: 
COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.196 280172 DEBUG oslo_concurrency.processutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:45:27 localhost openstack_network_exporter[240973]: ERROR 09:45:27 appctl.go:144: Failed to get PID for ovn-northd: no 
control socket files found for ovn-northd Nov 28 04:45:27 localhost openstack_network_exporter[240973]: ERROR 09:45:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:45:27 localhost openstack_network_exporter[240973]: ERROR 09:45:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:45:27 localhost openstack_network_exporter[240973]: ERROR 09:45:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:45:27 localhost openstack_network_exporter[240973]: Nov 28 04:45:27 localhost openstack_network_exporter[240973]: ERROR 09:45:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:45:27 localhost openstack_network_exporter[240973]: Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.656 280172 DEBUG oslo_concurrency.processutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.662 280172 DEBUG nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N Nov 28 04:45:27 localhost nova_compute[280168]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.663 280172 INFO nova.virt.libvirt.host [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] kernel doesn't support AMD SEV#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.664 280172 DEBUG nova.compute.provider_tree [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Inventory has not changed in ProviderTree for provider: 
72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.665 280172 DEBUG nova.virt.libvirt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.689 280172 DEBUG nova.scheduler.client.report [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.735 280172 DEBUG nova.compute.resource_tracker [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.735 280172 DEBUG oslo_concurrency.lockutils [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 
09:45:27.735 280172 DEBUG nova.service [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.764 280172 DEBUG nova.service [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m Nov 28 04:45:27 localhost nova_compute[280168]: 2025-11-28 09:45:27.765 280172 DEBUG nova.servicegroup.drivers.db [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] DB_Driver: join new ServiceGroup member np0005538515.localdomain to the compute group, service = join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m Nov 28 04:45:28 localhost python3.9[280444]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None 
healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None Nov 28 04:45:28 localhost podman[239012]: time="2025-11-28T09:45:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:45:28 localhost systemd[1]: Started libpod-conmon-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope. Nov 28 04:45:28 localhost systemd[1]: Started libcrun container. 
Nov 28 04:45:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff) Nov 28 04:45:29 localhost podman[280469]: 2025-11-28 09:45:29.002193792 +0000 UTC m=+0.166094375 container init acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.license=GPLv2) Nov 28 04:45:29 localhost podman[280469]: 2025-11-28 09:45:29.014348149 +0000 UTC m=+0.178248732 container start acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 04:45:29 localhost podman[239012]: @ - - [28/Nov/2025:09:45:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:45:29 localhost python3.9[280444]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init Nov 28 04:45:29 localhost podman[239012]: @ - - [28/Nov/2025:09:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17804 "" "Go-http-client/1.1" 
Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Applying nova statedir ownership Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/ Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/ Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 
gid: 42436 path: /var/lib/nova/.cache/ Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/ Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/469bc4441baff9216df986857f9ff45dbf25965a8d2f755a6449ac2645cb7191 Nov 28 04:45:29 localhost nova_compute_init[280488]: INFO:nova_statedir:Nova statedir ownership complete Nov 28 04:45:29 localhost systemd[1]: libpod-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope: Deactivated successfully. 
Nov 28 04:45:29 localhost podman[280489]: 2025-11-28 09:45:29.098738058 +0000 UTC m=+0.060982944 container died acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}) Nov 28 04:45:29 localhost podman[280502]: 2025-11-28 09:45:29.174025124 +0000 UTC m=+0.071154039 container cleanup acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true) Nov 28 04:45:29 localhost systemd[1]: libpod-conmon-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d.scope: Deactivated successfully. Nov 28 04:45:29 localhost systemd[1]: var-lib-containers-storage-overlay-8d7a1875c2425cad72cafe803874ca1ca683dd2f4b513ab7c102d534a7a81b79-merged.mount: Deactivated successfully. Nov 28 04:45:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acc5612457ab293e4f840ea19b50676bf97e3477bba289ad940bf778a740745d-userdata-shm.mount: Deactivated successfully. Nov 28 04:45:30 localhost systemd[1]: session-59.scope: Deactivated successfully. Nov 28 04:45:30 localhost systemd[1]: session-59.scope: Consumed 1min 28.839s CPU time. Nov 28 04:45:30 localhost systemd-logind[763]: Session 59 logged out. Waiting for processes to exit. Nov 28 04:45:30 localhost systemd-logind[763]: Removed session 59. Nov 28 04:45:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 04:45:31 localhost podman[280549]: 2025-11-28 09:45:31.985224272 +0000 UTC m=+0.090379945 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:45:32 localhost podman[280549]: 2025-11-28 09:45:32.0028733 +0000 UTC m=+0.108028933 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:45:32 localhost systemd[1]: 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:45:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38024 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADDF5A90000000001030307) Nov 28 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:45:32 localhost podman[280569]: 2025-11-28 09:45:32.110753157 +0000 UTC m=+0.104284696 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 28 04:45:32 localhost podman[280569]: 2025-11-28 09:45:32.151532763 +0000 UTC m=+0.145064302 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 04:45:32 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:45:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:45:32 localhost podman[280586]: 2025-11-28 09:45:32.198282783 +0000 UTC m=+0.077879287 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 04:45:32 localhost podman[280601]: 2025-11-28 09:45:32.273247749 +0000 UTC m=+0.074097630 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:45:32 localhost podman[280601]: 2025-11-28 09:45:32.287434539 +0000 UTC m=+0.088284390 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 
'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:45:32 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:45:32 localhost podman[280586]: 2025-11-28 09:45:32.309654529 +0000 UTC m=+0.189251023 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Nov 28 
04:45:32 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:45:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38025 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADDF9BA0000000001030307) Nov 28 04:45:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58799 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADDFCFB0000000001030307) Nov 28 04:45:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38026 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE01BA0000000001030307) Nov 28 04:45:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18468 DF PROTO=TCP SPT=39560 DPT=9102 SEQ=918741934 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE04FB0000000001030307) Nov 28 04:45:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:45:36 localhost podman[280633]: 2025-11-28 09:45:36.948596569 +0000 UTC m=+0.061838340 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:45:36 localhost podman[280633]: 2025-11-28 09:45:36.960447066 +0000 UTC m=+0.073688807 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:45:36 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:45:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38027 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE117A0000000001030307) Nov 28 04:45:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:45:43 localhost podman[280656]: 2025-11-28 09:45:43.638053124 +0000 UTC m=+0.082290394 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:45:43 localhost podman[280656]: 2025-11-28 09:45:43.652590955 +0000 UTC m=+0.096828265 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 28 04:45:43 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:45:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38028 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE30FB0000000001030307) Nov 28 04:45:47 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:47.617 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:45:47 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:47.619 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 04:45:47 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:47.622 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 04:45:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:50.827 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:45:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:50.828 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:45:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:45:50.828 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:45:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:45:53 localhost podman[280675]: 2025-11-28 09:45:53.977992078 +0000 UTC m=+0.081788138 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 04:45:53 localhost podman[280675]: 2025-11-28 09:45:53.992420716 +0000 UTC m=+0.096216786 container exec_died 
6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6) Nov 28 04:45:54 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:45:57 localhost openstack_network_exporter[240973]: ERROR 09:45:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:45:57 localhost openstack_network_exporter[240973]: ERROR 09:45:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:45:57 localhost openstack_network_exporter[240973]: ERROR 09:45:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:45:57 localhost openstack_network_exporter[240973]: ERROR 09:45:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:45:57 localhost openstack_network_exporter[240973]: Nov 28 04:45:57 localhost openstack_network_exporter[240973]: ERROR 09:45:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:45:57 localhost openstack_network_exporter[240973]: Nov 28 04:45:58 localhost podman[239012]: time="2025-11-28T09:45:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:45:58 localhost podman[239012]: @ - - [28/Nov/2025:09:45:58 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1"
Nov 28 04:45:58 localhost podman[239012]: @ - - [28/Nov/2025:09:45:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17699 "" "Go-http-client/1.1"
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.617 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.618 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:46:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:46:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56160 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE6AD90000000001030307)
Nov 28 04:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:46:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:46:02 localhost systemd[1]: tmp-crun.xI1fOk.mount: Deactivated successfully.
Nov 28 04:46:02 localhost podman[280703]: 2025-11-28 09:46:02.98684869 +0000 UTC m=+0.076538326 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 04:46:02 localhost podman[280703]: 2025-11-28 09:46:02.994553859 +0000 UTC m=+0.084243505 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 04:46:03 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully.
Nov 28 04:46:03 localhost podman[280702]: 2025-11-28 09:46:03.031817906 +0000 UTC m=+0.128893951 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 28 04:46:03 localhost podman[280696]: 2025-11-28 09:46:03.084437339 +0000 UTC m=+0.184934310 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 04:46:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56161 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE6EFA0000000001030307)
Nov 28 04:46:03 localhost podman[280695]: 2025-11-28 09:46:02.962925928 +0000 UTC m=+0.071035295 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 04:46:03 localhost podman[280696]: 2025-11-28 09:46:03.142333564 +0000 UTC m=+0.242830605 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 28 04:46:03 localhost podman[280695]: 2025-11-28 09:46:03.15057998 +0000 UTC m=+0.258689417 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 04:46:03 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 04:46:03 localhost podman[280702]: 2025-11-28 09:46:03.167603028 +0000 UTC m=+0.264679113 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 28 04:46:03 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 04:46:03 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 04:46:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38029 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE70FA0000000001030307)
Nov 28 04:46:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56162 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE76FA0000000001030307)
Nov 28 04:46:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58800 DF PROTO=TCP SPT=52604 DPT=9102 SEQ=1885305082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE7AFB0000000001030307)
Nov 28 04:46:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 04:46:07 localhost systemd[1]: tmp-crun.n8xbZT.mount: Deactivated successfully.
Nov 28 04:46:07 localhost podman[280781]: 2025-11-28 09:46:07.958322938 +0000 UTC m=+0.068910898 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 04:46:07 localhost podman[280781]: 2025-11-28 09:46:07.97060071 +0000 UTC m=+0.081188650 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 04:46:07 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 04:46:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56163 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADE86BA0000000001030307)
Nov 28 04:46:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 04:46:13 localhost systemd[1]: tmp-crun.Jd1U80.mount: Deactivated successfully.
Nov 28 04:46:13 localhost podman[280803]: 2025-11-28 09:46:13.96989191 +0000 UTC m=+0.079166068 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 28 04:46:13 localhost podman[280803]: 2025-11-28 09:46:13.988410585 +0000 UTC m=+0.097684773 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:46:14 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 04:46:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56164 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEA6FA0000000001030307)
Nov 28 04:46:20 localhost nova_compute[280168]: 2025-11-28 09:46:20.767 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:20 localhost nova_compute[280168]: 2025-11-28 09:46:20.788 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 04:46:24 localhost podman[280822]: 2025-11-28 09:46:24.960085889 +0000 UTC m=+0.068934439 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm)
Nov 28 04:46:24 localhost podman[280822]: 2025-11-28 09:46:24.971434782 +0000 UTC m=+0.080283352 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, config_id=edpm, io.openshift.expose-services=)
Nov 28 04:46:24 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.274 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.275 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.275 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.276 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.276 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.277 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.277 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.277 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.277 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.300 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.301 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.301 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.301 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 
- - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.302 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:46:25 localhost nova_compute[280168]: 2025-11-28 09:46:25.750 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:25.998 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.000 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12529MB free_disk=41.83693313598633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.001 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.001 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.093 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.093 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.112 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.602 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.609 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.626 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.628 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:46:26 localhost nova_compute[280168]: 2025-11-28 09:46:26.628 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:46:27 localhost openstack_network_exporter[240973]: ERROR 09:46:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:46:27 localhost openstack_network_exporter[240973]: ERROR 09:46:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:46:27 localhost openstack_network_exporter[240973]: ERROR 09:46:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:46:27 localhost openstack_network_exporter[240973]: ERROR 09:46:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:46:27 localhost openstack_network_exporter[240973]: Nov 28 04:46:27 localhost openstack_network_exporter[240973]: ERROR 09:46:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:46:27 localhost openstack_network_exporter[240973]: Nov 28 04:46:28 localhost podman[239012]: time="2025-11-28T09:46:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:46:28 localhost podman[239012]: @ - - [28/Nov/2025:09:46:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:46:28 localhost podman[239012]: @ - - [28/Nov/2025:09:46:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17706 "" 
"Go-http-client/1.1" Nov 28 04:46:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9353 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEE0090000000001030307) Nov 28 04:46:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9354 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEE3FA0000000001030307) Nov 28 04:46:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56165 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEE6FA0000000001030307) Nov 28 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:46:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:46:33 localhost podman[280979]: 2025-11-28 09:46:33.981275445 +0000 UTC m=+0.074962327 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:46:33 localhost podman[280979]: 2025-11-28 09:46:33.992303787 +0000 UTC m=+0.085990689 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:46:34 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:46:34 localhost podman[280974]: 2025-11-28 09:46:34.037403777 +0000 UTC m=+0.138487809 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible) Nov 28 04:46:34 localhost podman[280974]: 2025-11-28 09:46:34.047313084 +0000 UTC m=+0.148397116 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 28 04:46:34 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:46:34 localhost podman[280975]: 2025-11-28 09:46:34.090251047 +0000 UTC m=+0.188012736 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 04:46:34 localhost podman[280976]: 2025-11-28 09:46:34.143029713 +0000 UTC m=+0.238304185 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 28 04:46:34 localhost podman[280975]: 2025-11-28 09:46:34.152682253 +0000 UTC m=+0.250443982 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, 
name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:46:34 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:46:34 localhost podman[280976]: 2025-11-28 09:46:34.178597237 +0000 UTC m=+0.273871709 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 04:46:34 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:46:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9355 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEEBFA0000000001030307) Nov 28 04:46:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38030 DF PROTO=TCP SPT=50632 DPT=9102 SEQ=2766945851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEEEFA0000000001030307) Nov 28 04:46:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:46:38 localhost podman[281055]: 2025-11-28 09:46:38.972692102 +0000 UTC m=+0.080858650 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:46:38 localhost podman[281055]: 2025-11-28 09:46:38.980581166 +0000 UTC m=+0.088747694 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:46:38 localhost 
systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:46:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9356 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADEFBBB0000000001030307) Nov 28 04:46:42 localhost ovn_controller[152726]: 2025-11-28T09:46:42Z|00037|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory Nov 28 04:46:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:46:44 localhost podman[281079]: 2025-11-28 09:46:44.974796661 +0000 UTC m=+0.079515438 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible) Nov 28 04:46:45 localhost podman[281079]: 2025-11-28 09:46:45.0176636 +0000 UTC m=+0.122382327 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 04:46:45 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:46:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9357 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF1CFB0000000001030307) Nov 28 04:46:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:46:50.828 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:46:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:46:50.829 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:46:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:46:50.829 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:46:55 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:46:55 localhost podman[281097]: 2025-11-28 09:46:55.983176416 +0000 UTC m=+0.085863285 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, 
architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal) Nov 28 04:46:56 localhost podman[281097]: 2025-11-28 09:46:56.024532299 +0000 UTC m=+0.127219148 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:46:56 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:46:57 localhost openstack_network_exporter[240973]: ERROR 09:46:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:46:57 localhost openstack_network_exporter[240973]: ERROR 09:46:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:46:57 localhost openstack_network_exporter[240973]: Nov 28 04:46:57 localhost openstack_network_exporter[240973]: ERROR 09:46:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:46:57 localhost openstack_network_exporter[240973]: ERROR 09:46:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:46:57 localhost openstack_network_exporter[240973]: ERROR 09:46:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:46:57 localhost openstack_network_exporter[240973]: Nov 28 04:46:58 localhost podman[239012]: time="2025-11-28T09:46:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:46:58 localhost podman[239012]: @ - - [28/Nov/2025:09:46:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:46:58 localhost podman[239012]: @ - - [28/Nov/2025:09:46:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17695 "" "Go-http-client/1.1" Nov 28 04:47:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7755 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF55390000000001030307) Nov 28 04:47:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 
SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7756 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF593A0000000001030307) Nov 28 04:47:04 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9358 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF5CFA0000000001030307) Nov 28 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:47:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:47:04 localhost podman[281119]: 2025-11-28 09:47:04.989452448 +0000 UTC m=+0.094141367 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller) Nov 28 04:47:05 localhost podman[281118]: 2025-11-28 09:47:05.04232315 +0000 UTC m=+0.149893798 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 
'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:47:05 localhost podman[281119]: 2025-11-28 09:47:05.046550751 +0000 UTC m=+0.151239680 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:47:05 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:47:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7757 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF613A0000000001030307) Nov 28 04:47:05 localhost podman[281120]: 2025-11-28 09:47:05.125119587 +0000 UTC m=+0.221210441 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:47:05 localhost podman[281118]: 2025-11-28 09:47:05.132507676 +0000 UTC m=+0.240078364 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 
'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:47:05 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:47:05 localhost podman[281125]: 2025-11-28 09:47:05.174473382 +0000 UTC m=+0.272998611 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:47:05 localhost podman[281120]: 2025-11-28 09:47:05.186370289 +0000 UTC m=+0.282461103 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:47:05 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:47:05 localhost podman[281125]: 2025-11-28 09:47:05.206254553 +0000 UTC m=+0.304779782 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:47:05 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:47:05 localhost systemd[1]: tmp-crun.aFfvby.mount: Deactivated successfully. Nov 28 04:47:06 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56166 DF PROTO=TCP SPT=53668 DPT=9102 SEQ=1281477598 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF64FA0000000001030307) Nov 28 04:47:07 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. 
Nov 28 04:47:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7758 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF70FA0000000001030307) Nov 28 04:47:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:47:09 localhost podman[281200]: 2025-11-28 09:47:09.980932057 +0000 UTC m=+0.089048441 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:47:09 localhost podman[281200]: 2025-11-28 09:47:09.990243645 +0000 UTC m=+0.098360069 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:47:10 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:47:13 localhost ovn_controller[152726]: 2025-11-28T09:47:13Z|00038|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory Nov 28 04:47:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:47:15 localhost podman[281222]: 2025-11-28 09:47:15.980317178 +0000 UTC m=+0.084376936 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3) Nov 28 04:47:15 localhost podman[281222]: 2025-11-28 09:47:15.990564215 +0000 UTC m=+0.094623993 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:47:16 
localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:47:16 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0. Nov 28 04:47:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7759 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADF90FB0000000001030307) Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.621 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.643 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.644 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.644 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.644 
280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.644 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.645 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.666 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.666 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.667 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.667 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:47:26 localhost nova_compute[280168]: 2025-11-28 09:47:26.667 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:47:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:47:26 localhost podman[281261]: 2025-11-28 09:47:26.9771152 +0000 UTC m=+0.085634954 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=) Nov 28 04:47:27 localhost podman[281261]: 2025-11-28 09:47:27.016125685 +0000 UTC m=+0.124645449 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41) Nov 28 04:47:27 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.223 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.438 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.440 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12503MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.440 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.441 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.532 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.532 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:47:27 localhost nova_compute[280168]: 2025-11-28 09:47:27.578 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:47:27 localhost openstack_network_exporter[240973]: ERROR 09:47:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:47:27 localhost openstack_network_exporter[240973]: ERROR 09:47:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:47:27 localhost openstack_network_exporter[240973]: ERROR 09:47:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:47:27 localhost openstack_network_exporter[240973]: ERROR 09:47:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:47:27 localhost openstack_network_exporter[240973]: Nov 28 04:47:27 localhost openstack_network_exporter[240973]: ERROR 09:47:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:47:27 localhost openstack_network_exporter[240973]: Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.054 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.060 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 
2025-11-28 09:47:28.072 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.073 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.074 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.668 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.668 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache 
run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.669 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.669 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.684 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.684 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:28 localhost nova_compute[280168]: 2025-11-28 09:47:28.685 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:47:28 localhost podman[239012]: time="2025-11-28T09:47:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:47:28 localhost podman[239012]: @ - - [28/Nov/2025:09:47:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false 
HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:47:28 localhost podman[239012]: @ - - [28/Nov/2025:09:47:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17706 "" "Go-http-client/1.1" Nov 28 04:47:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50498 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFCA690000000001030307) Nov 28 04:47:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50499 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFCE7A0000000001030307) Nov 28 04:47:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7760 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFD0FA0000000001030307) Nov 28 04:47:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50500 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFD67A0000000001030307) Nov 28 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:47:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:47:36 localhost podman[281391]: 2025-11-28 09:47:36.009483157 +0000 UTC m=+0.109049947 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, 
managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:47:36 localhost podman[281392]: 2025-11-28 09:47:36.05331446 +0000 UTC m=+0.151290052 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:47:36 localhost podman[281392]: 2025-11-28 09:47:36.102883611 +0000 UTC m=+0.200859153 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:47:36 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:47:36 localhost podman[281391]: 2025-11-28 09:47:36.120009159 +0000 UTC m=+0.219575940 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 04:47:36 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:47:36 localhost podman[281393]: 2025-11-28 09:47:36.111479336 +0000 UTC m=+0.204622788 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 28 04:47:36 localhost podman[281393]: 2025-11-28 09:47:36.197612946 +0000 UTC 
m=+0.290756448 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:47:36 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:47:36 localhost podman[281394]: 2025-11-28 09:47:36.266904005 +0000 UTC m=+0.357633913 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:47:36 localhost podman[281394]: 2025-11-28 09:47:36.274936883 +0000 UTC m=+0.365666821 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:47:36 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:47:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9359 DF PROTO=TCP SPT=53558 DPT=9102 SEQ=1644628444 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFDAFA0000000001030307) Nov 28 04:47:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50501 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5ADFE63A0000000001030307) Nov 28 04:47:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:47:40 localhost podman[281475]: 2025-11-28 09:47:40.966244937 +0000 UTC m=+0.071376215 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:47:40 localhost podman[281475]: 2025-11-28 09:47:40.99872613 +0000 UTC m=+0.103857418 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:47:41 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:47:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:47:46 localhost podman[281498]: 2025-11-28 09:47:46.966397318 +0000 UTC m=+0.070696543 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible) Nov 28 04:47:46 localhost podman[281498]: 2025-11-28 09:47:46.981607207 +0000 UTC m=+0.085906502 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:47:46 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:47:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50502 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE006FA0000000001030307) Nov 28 04:47:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:47:50.829 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:47:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:47:50.830 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:47:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:47:50.830 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:47:57 localhost openstack_network_exporter[240973]: ERROR 09:47:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:47:57 localhost openstack_network_exporter[240973]: ERROR 09:47:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:47:57 localhost openstack_network_exporter[240973]: ERROR 09:47:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:47:57 localhost openstack_network_exporter[240973]: ERROR 09:47:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please 
specify an existing datapath Nov 28 04:47:57 localhost openstack_network_exporter[240973]: Nov 28 04:47:57 localhost openstack_network_exporter[240973]: ERROR 09:47:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:47:57 localhost openstack_network_exporter[240973]: Nov 28 04:47:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:47:57 localhost podman[281519]: 2025-11-28 09:47:57.735955334 +0000 UTC m=+0.074491971 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm, vcs-type=git, maintainer=Red Hat, Inc.) 
Nov 28 04:47:57 localhost podman[281519]: 2025-11-28 09:47:57.747567052 +0000 UTC m=+0.086103699 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, release=1755695350, io.openshift.expose-services=, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container) Nov 28 04:47:57 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:47:58 localhost podman[239012]: time="2025-11-28T09:47:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:47:58 localhost podman[239012]: @ - - [28/Nov/2025:09:47:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:47:58 localhost podman[239012]: @ - - [28/Nov/2025:09:47:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1" Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.619 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
09:48:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:48:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:48:02 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40378 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE03F990000000001030307) Nov 28 04:48:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 
LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40379 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE043BB0000000001030307) Nov 28 04:48:03 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50503 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE046FB0000000001030307) Nov 28 04:48:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40380 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE04BBB0000000001030307) Nov 28 04:48:05 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7761 DF PROTO=TCP SPT=36244 DPT=9102 SEQ=2949777023 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE04EFA0000000001030307) Nov 28 04:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:48:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:48:06 localhost podman[281538]: 2025-11-28 09:48:06.9939819 +0000 UTC m=+0.097053678 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 04:48:07 localhost podman[281538]: 2025-11-28 09:48:07.030456106 +0000 UTC m=+0.133527884 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:48:07 localhost systemd[1]: tmp-crun.3ds0QA.mount: Deactivated successfully. Nov 28 04:48:07 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:48:07 localhost podman[281539]: 2025-11-28 09:48:07.044986744 +0000 UTC m=+0.144380988 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:48:07 localhost podman[281539]: 2025-11-28 09:48:07.085450834 +0000 UTC m=+0.184845098 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2) Nov 28 04:48:07 localhost podman[281545]: 2025-11-28 09:48:07.100327143 +0000 UTC m=+0.190975147 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:48:07 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:48:07 localhost podman[281545]: 2025-11-28 09:48:07.11449265 +0000 UTC m=+0.205140634 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:48:07 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:48:07 localhost podman[281540]: 2025-11-28 09:48:07.206223013 +0000 UTC m=+0.299490198 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:48:07 localhost podman[281540]: 2025-11-28 09:48:07.213006282 +0000 UTC 
m=+0.306273457 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:48:07 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:48:09 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40381 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE05B7A0000000001030307) Nov 28 04:48:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:48:11 localhost podman[281621]: 2025-11-28 09:48:11.97451849 +0000 UTC m=+0.086994087 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , 
managed_by=edpm_ansible) Nov 28 04:48:11 localhost podman[281621]: 2025-11-28 09:48:11.985566261 +0000 UTC m=+0.098041908 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:48:11 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:48:17 localhost sshd[281644]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:48:17 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40382 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE07AFA0000000001030307) Nov 28 04:48:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:48:17 localhost systemd-logind[763]: New session 61 of user zuul. Nov 28 04:48:17 localhost systemd[1]: Started Session 61 of User zuul. Nov 28 04:48:17 localhost podman[281646]: 2025-11-28 09:48:17.350018876 +0000 UTC m=+0.081250309 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 28 04:48:17 localhost podman[281646]: 2025-11-28 09:48:17.386643687 +0000 UTC m=+0.117875090 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:48:17 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:48:17 localhost python3[281685]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 04:48:18 localhost subscription-manager[281686]: Unregistered machine with identity: c20224ed-ba86-41a6-a487-b9546587a93c Nov 28 04:48:26 localhost nova_compute[280168]: 2025-11-28 09:48:26.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:26 localhost nova_compute[280168]: 2025-11-28 09:48:26.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:26 localhost nova_compute[280168]: 2025-11-28 09:48:26.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:48:27 localhost nova_compute[280168]: 2025-11-28 09:48:27.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:27 localhost openstack_network_exporter[240973]: ERROR 09:48:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:48:27 localhost openstack_network_exporter[240973]: ERROR 09:48:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:48:27 localhost openstack_network_exporter[240973]: ERROR 09:48:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:48:27 localhost openstack_network_exporter[240973]: ERROR 09:48:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:48:27 localhost openstack_network_exporter[240973]: Nov 28 04:48:27 localhost openstack_network_exporter[240973]: ERROR 09:48:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:48:27 localhost openstack_network_exporter[240973]: Nov 28 04:48:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:48:27 localhost podman[281688]: 2025-11-28 09:48:27.976263317 +0000 UTC m=+0.082223819 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:48:27 localhost podman[281688]: 2025-11-28 09:48:27.987442462 +0000 UTC m=+0.093402914 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:48:28 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.253 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.253 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.253 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.254 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.283 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.284 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.284 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock 
"compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.284 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.285 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.745 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:48:28 localhost podman[239012]: time="2025-11-28T09:48:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:48:28 localhost podman[239012]: @ - - [28/Nov/2025:09:48:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.960 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.962 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12517MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.963 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:48:28 localhost nova_compute[280168]: 2025-11-28 09:48:28.963 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:48:28 localhost podman[239012]: @ - - [28/Nov/2025:09:48:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17703 "" "Go-http-client/1.1" Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.170 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.171 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] 
_report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.196 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.665 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.671 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.775 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.778 280172 DEBUG 
nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:48:29 localhost nova_compute[280168]: 2025-11-28 09:48:29.778 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:48:30 localhost nova_compute[280168]: 2025-11-28 09:48:30.764 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:48:32 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34317 DF PROTO=TCP SPT=39882 DPT=9102 SEQ=3804483572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0B4C90000000001030307) Nov 28 04:48:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34318 DF PROTO=TCP SPT=39882 DPT=9102 SEQ=3804483572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0B8BA0000000001030307) Nov 28 04:48:33 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40383 DF PROTO=TCP SPT=52778 DPT=9102 SEQ=640193919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0BAFA0000000001030307) 
Nov 28 04:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:48:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.1 total, 600.0 interval
Cumulative writes: 4850 writes, 21K keys, 4850 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 4850 writes, 661 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 66 writes, 202 keys, 66 commit groups, 1.0 writes per commit group, ingest: 0.25 MB, 0.00 MB/s
Interval WAL: 66 writes, 24 syncs, 2.75 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 28 04:48:35 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34319 DF PROTO=TCP SPT=39882 DPT=9102 SEQ=3804483572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0C0BA0000000001030307)
Nov 28 04:48:36 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50504 DF PROTO=TCP SPT=48254 DPT=9102 SEQ=1993357398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0C4FA0000000001030307)
Nov 28 04:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:48:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:48:38 localhost podman[281892]: 2025-11-28 09:48:38.009328462 +0000 UTC m=+0.095780198 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:48:38 localhost podman[281892]: 2025-11-28 09:48:38.017374141 +0000 UTC m=+0.103825807 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': 
'/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:48:38 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:48:38 localhost podman[281889]: 2025-11-28 09:48:38.062816424 +0000 UTC m=+0.159960160 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:48:38 localhost podman[281891]: 2025-11-28 09:48:38.106536714 +0000 UTC m=+0.196693484 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:48:38 localhost podman[281889]: 2025-11-28 09:48:38.123203429 +0000 UTC m=+0.220347165 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 28 04:48:38 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:48:38 localhost podman[281891]: 2025-11-28 09:48:38.140509513 +0000 UTC m=+0.230666333 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 04:48:38 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:48:38 localhost podman[281890]: 2025-11-28 09:48:38.215335964 +0000 UTC m=+0.308996152 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:48:38 localhost podman[281890]: 2025-11-28 09:48:38.294619881 +0000 UTC m=+0.388280129 container exec_died 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:48:38 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:48:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7200.2 total, 600.0 interval
Cumulative writes: 5854 writes, 25K keys, 5854 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
Cumulative WAL: 5854 writes, 763 syncs, 7.67 writes per sync, written: 0.02 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 73 writes, 221 keys, 73 commit groups, 1.0 writes per commit group, ingest: 0.35 MB, 0.00 MB/s
Interval WAL: 73 writes, 34 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 28 04:48:39 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34320 DF PROTO=TCP SPT=39882 DPT=9102 SEQ=3804483572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0D07A0000000001030307)
Nov 28 04:48:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:48:42 localhost podman[281975]: 2025-11-28 09:48:42.993566179 +0000 UTC m=+0.097989717 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:48:43 localhost podman[281975]: 2025-11-28 09:48:43.02471166 +0000 UTC m=+0.129135268 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:48:43 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:48:47 localhost kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:11:5f:6c MACDST=fa:16:3e:f7:e2:83 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34321 DF PROTO=TCP SPT=39882 DPT=9102 SEQ=3804483572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080A5AE0F0FA0000000001030307) Nov 28 04:48:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:48:47 localhost podman[281998]: 2025-11-28 09:48:47.993045184 +0000 UTC m=+0.096351615 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:48:48 localhost podman[281998]: 2025-11-28 09:48:48.00684062 +0000 UTC m=+0.110147001 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 04:48:48 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:48:49 localhost kernel: DROPPING: IN=eth0 OUT= MACSRC=c6:e7:bc:23:0b:06 MACDST=fa:16:3e:93:ca:2d MACPROTO=0800 SRC=3.138.197.221 DST=38.102.83.53 LEN=52 TOS=0x00 PREC=0x00 TTL=50 ID=37683 PROTO=TCP SPT=59427 DPT=9090 SEQ=1778054124 ACK=0 WINDOW=65535 RES=0x00 SYN URGP=0 OPT (020405B40103030801010402) Nov 28 04:48:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:48:50.830 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:48:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:48:50.831 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:48:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:48:50.832 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:48:57 localhost sshd[282071]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:48:57 localhost systemd[1]: Created slice User Slice of UID 1003. Nov 28 04:48:57 localhost systemd[1]: Starting User Runtime Directory /run/user/1003... Nov 28 04:48:57 localhost systemd-logind[763]: New session 62 of user tripleo-admin. Nov 28 04:48:57 localhost systemd[1]: Finished User Runtime Directory /run/user/1003. 
Nov 28 04:48:57 localhost openstack_network_exporter[240973]: ERROR 09:48:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:48:57 localhost openstack_network_exporter[240973]: ERROR 09:48:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:48:57 localhost openstack_network_exporter[240973]: ERROR 09:48:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:48:57 localhost openstack_network_exporter[240973]: ERROR 09:48:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:48:57 localhost openstack_network_exporter[240973]: Nov 28 04:48:57 localhost openstack_network_exporter[240973]: ERROR 09:48:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:48:57 localhost openstack_network_exporter[240973]: Nov 28 04:48:57 localhost systemd[1]: Starting User Manager for UID 1003... Nov 28 04:48:57 localhost systemd[282075]: Queued start job for default target Main User Target. Nov 28 04:48:57 localhost systemd[282075]: Created slice User Application Slice. Nov 28 04:48:57 localhost systemd[282075]: Started Mark boot as successful after the user session has run 2 minutes. Nov 28 04:48:57 localhost systemd[282075]: Started Daily Cleanup of User's Temporary Directories. Nov 28 04:48:57 localhost systemd[282075]: Reached target Paths. Nov 28 04:48:57 localhost systemd[282075]: Reached target Timers. Nov 28 04:48:57 localhost systemd[282075]: Starting D-Bus User Message Bus Socket... Nov 28 04:48:57 localhost systemd[282075]: Starting Create User's Volatile Files and Directories... Nov 28 04:48:57 localhost systemd[282075]: Listening on D-Bus User Message Bus Socket. Nov 28 04:48:57 localhost systemd[282075]: Reached target Sockets. 
Nov 28 04:48:57 localhost systemd[282075]: Finished Create User's Volatile Files and Directories. Nov 28 04:48:57 localhost systemd[282075]: Reached target Basic System. Nov 28 04:48:57 localhost systemd[282075]: Reached target Main User Target. Nov 28 04:48:57 localhost systemd[282075]: Startup finished in 147ms. Nov 28 04:48:57 localhost systemd[1]: Started User Manager for UID 1003. Nov 28 04:48:57 localhost systemd[1]: Started Session 62 of User tripleo-admin. Nov 28 04:48:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:48:58 localhost podman[282218]: 2025-11-28 09:48:58.438238756 +0000 UTC m=+0.074401069 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 04:48:58 localhost podman[282218]: 2025-11-28 09:48:58.451491704 +0000 UTC m=+0.087654037 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=) Nov 28 04:48:58 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:48:58 localhost python3[282219]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)#012add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"#012# 100 ceph_dashboard (8443)#012add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"#012# 100 ceph_grafana (3100)#012add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"#012# 100 ceph_prometheus (9092)#012add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"#012# 100 ceph_rgw (8080)#012add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"#012# 110 ceph_mon (6789, 3300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"#012# 112 ceph_mds (6800-7300, 9100)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"#012# 113 ceph_mgr (6800-7300, 8444)#012add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"#012# 120 ceph_nfs (2049, 12049)#012add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"#012# 123 ceph_dashboard (9090, 9094, 9283)#012add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"#012 insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None Nov 28 04:48:58 localhost systemd-journald[48427]: Field hash 
table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation. Nov 28 04:48:58 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 04:48:58 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:48:58 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 04:48:58 localhost podman[239012]: time="2025-11-28T09:48:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:48:58 localhost podman[239012]: @ - - [28/Nov/2025:09:48:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149991 "" "Go-http-client/1.1" Nov 28 04:48:58 localhost podman[239012]: @ - - [28/Nov/2025:09:48:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17704 "" "Go-http-client/1.1" Nov 28 04:48:59 localhost python3[282381]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None Nov 28 04:49:00 localhost systemd[1]: Stopping Netfilter Tables... Nov 28 04:49:00 localhost systemd[1]: nftables.service: Deactivated successfully. Nov 28 04:49:00 localhost systemd[1]: Stopped Netfilter Tables. Nov 28 04:49:00 localhost systemd[1]: Starting Netfilter Tables... Nov 28 04:49:00 localhost systemd[1]: Finished Netfilter Tables. Nov 28 04:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 04:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:49:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:49:08 localhost podman[282499]: 2025-11-28 09:49:08.509256033 +0000 UTC m=+0.081139226 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:49:08 localhost podman[282499]: 2025-11-28 09:49:08.519909742 +0000 UTC m=+0.091792985 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:49:08 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:49:08 localhost systemd[1]: tmp-crun.9TS4d5.mount: Deactivated successfully. Nov 28 04:49:08 localhost podman[282497]: 2025-11-28 09:49:08.567918234 +0000 UTC m=+0.143567984 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:49:08 localhost podman[282498]: 2025-11-28 09:49:08.705853773 +0000 UTC m=+0.279299965 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 04:49:08 localhost podman[282496]: 2025-11-28 09:49:08.714375616 +0000 UTC m=+0.292065808 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.3, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:49:08 localhost podman[282496]: 2025-11-28 09:49:08.726251802 +0000 UTC m=+0.303941994 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack 
Kubernetes Operator team, config_id=edpm) Nov 28 04:49:08 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:49:08 localhost podman[282498]: 2025-11-28 09:49:08.741341449 +0000 UTC m=+0.314787651 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, 
tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS) Nov 28 04:49:08 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:49:08 localhost podman[282497]: 2025-11-28 09:49:08.791861879 +0000 UTC m=+0.367511659 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 04:49:08 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:49:09 localhost systemd[1]: tmp-crun.vzlSjZ.mount: Deactivated successfully. 
Nov 28 04:49:11 localhost podman[282673]: Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.391997912 +0000 UTC m=+0.064266365 container create 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux , RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, CEPH_POINT_RELEASE=, version=7, release=553, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-type=git, com.redhat.component=rhceph-container) Nov 28 04:49:11 localhost systemd[1]: Started libpod-conmon-5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299.scope. Nov 28 04:49:11 localhost systemd[1]: Started libcrun container. 
Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.364967527 +0000 UTC m=+0.037235990 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.468598247 +0000 UTC m=+0.140866720 container init 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, RELEASE=main, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, release=553, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.480603127 +0000 UTC m=+0.152871580 container start 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , architecture=x86_64, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.480846684 +0000 UTC m=+0.153115157 container attach 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, io.buildah.version=1.33.12, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:49:11 localhost strange_banach[282688]: 167 167 Nov 28 04:49:11 localhost systemd[1]: libpod-5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299.scope: Deactivated successfully. Nov 28 04:49:11 localhost podman[282673]: 2025-11-28 09:49:11.485612882 +0000 UTC m=+0.157881385 container died 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:49:11 localhost podman[282693]: 2025-11-28 09:49:11.588235421 +0000 UTC m=+0.089780043 container remove 5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_banach, io.buildah.version=1.33.12, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, 
com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=553, ceph=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:49:11 localhost systemd[1]: libpod-conmon-5c39b91f0e214bc2d9ddd580c265dd0c78d922f43a89f77bf8a2c196fd47b299.scope: Deactivated successfully. Nov 28 04:49:11 localhost systemd[1]: Reloading. Nov 28 04:49:11 localhost systemd-sysv-generator[282738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:49:11 localhost systemd-rc-local-generator[282734]: /etc/rc.d/rc.local is not marked executable, skipping. 
Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:11 localhost systemd[1]: var-lib-containers-storage-overlay-e69a33fd7597a815f85353cd5144c9887f9735726adf4c7894616900505306ec-merged.mount: Deactivated successfully. Nov 28 04:49:12 localhost systemd[1]: Reloading. Nov 28 04:49:12 localhost systemd-rc-local-generator[282780]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:49:12 localhost systemd-sysv-generator[282783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:49:12 localhost systemd[1]: Starting Ceph mds.mds.np0005538515.anvatb for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... 
Nov 28 04:49:12 localhost podman[282840]: Nov 28 04:49:12 localhost podman[282840]: 2025-11-28 09:49:12.850442813 +0000 UTC m=+0.086280244 container create c8b5ff4ae49aa0080332f0f2f830f5e8e5b9a599ac27dabefec286af414abefd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mds-mds-np0005538515-anvatb, version=7, build-date=2025-09-24T08:57:55, distribution-scope=public, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True) Nov 28 04:49:12 localhost systemd[1]: tmp-crun.KgNu5x.mount: Deactivated successfully. 
Nov 28 04:49:12 localhost podman[282840]: 2025-11-28 09:49:12.815601608 +0000 UTC m=+0.051439059 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:49:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf1c1705508931bd14cca6ca9a7a98cdec79f0341c06bb9a11dbf9c764b05b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf1c1705508931bd14cca6ca9a7a98cdec79f0341c06bb9a11dbf9c764b05b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf1c1705508931bd14cca6ca9a7a98cdec79f0341c06bb9a11dbf9c764b05b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf1c1705508931bd14cca6ca9a7a98cdec79f0341c06bb9a11dbf9c764b05b6/merged/var/lib/ceph/mds/ceph-mds.np0005538515.anvatb supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:12 localhost podman[282840]: 2025-11-28 09:49:12.931172086 +0000 UTC m=+0.167009567 container init c8b5ff4ae49aa0080332f0f2f830f5e8e5b9a599ac27dabefec286af414abefd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mds-mds-np0005538515-anvatb, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, release=553, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:49:12 localhost podman[282840]: 2025-11-28 09:49:12.94166785 +0000 UTC m=+0.177505271 container start c8b5ff4ae49aa0080332f0f2f830f5e8e5b9a599ac27dabefec286af414abefd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mds-mds-np0005538515-anvatb, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-type=git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., GIT_BRANCH=main, release=553, ceph=True, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:49:12 localhost bash[282840]: c8b5ff4ae49aa0080332f0f2f830f5e8e5b9a599ac27dabefec286af414abefd Nov 28 04:49:12 localhost systemd[1]: Started Ceph mds.mds.np0005538515.anvatb for 
2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 04:49:13 localhost ceph-mds[282859]: set uid:gid to 167:167 (ceph:ceph) Nov 28 04:49:13 localhost ceph-mds[282859]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2 Nov 28 04:49:13 localhost ceph-mds[282859]: main not setting numa affinity Nov 28 04:49:13 localhost ceph-mds[282859]: pidfile_write: ignore empty --pid-file Nov 28 04:49:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mds-mds-np0005538515-anvatb[282855]: starting mds.mds.np0005538515.anvatb at Nov 28 04:49:13 localhost ceph-mds[282859]: mds.mds.np0005538515.anvatb Updating MDS map to version 7 from mon.1 Nov 28 04:49:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:49:13 localhost podman[282878]: 2025-11-28 09:49:13.478397005 +0000 UTC m=+0.081365945 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:49:13 localhost podman[282878]: 2025-11-28 09:49:13.492420257 +0000 UTC m=+0.095389167 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:49:13 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated 
successfully. Nov 28 04:49:14 localhost ceph-mds[282859]: mds.mds.np0005538515.anvatb Updating MDS map to version 8 from mon.1 Nov 28 04:49:14 localhost ceph-mds[282859]: mds.mds.np0005538515.anvatb Monitors have assigned me to become a standby. Nov 28 04:49:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:49:18 localhost podman[282921]: 2025-11-28 09:49:18.278229245 +0000 UTC m=+0.094379545 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:49:18 localhost podman[282921]: 2025-11-28 09:49:18.294496336 +0000 UTC m=+0.110646696 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 04:49:18 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:49:18 localhost systemd[1]: session-61.scope: Deactivated successfully. Nov 28 04:49:18 localhost systemd-logind[763]: Session 61 logged out. Waiting for processes to exit. Nov 28 04:49:18 localhost systemd-logind[763]: Removed session 61. Nov 28 04:49:19 localhost systemd[1]: tmp-crun.VEdhdK.mount: Deactivated successfully. Nov 28 04:49:19 localhost podman[283048]: 2025-11-28 09:49:19.250806307 +0000 UTC m=+0.097283794 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, release=553, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, version=7, RELEASE=main, distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:49:19 localhost podman[283048]: 2025-11-28 09:49:19.346378478 +0000 UTC m=+0.192855955 container exec_died 
98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.openshift.tags=rhceph ceph, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.) 
Nov 28 04:49:26 localhost nova_compute[280168]: 2025-11-28 09:49:26.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:26 localhost nova_compute[280168]: 2025-11-28 09:49:26.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:26 localhost nova_compute[280168]: 2025-11-28 09:49:26.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:49:27 localhost nova_compute[280168]: 2025-11-28 09:49:27.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:27 localhost openstack_network_exporter[240973]: ERROR 09:49:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:49:27 localhost openstack_network_exporter[240973]: ERROR 09:49:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:49:27 localhost openstack_network_exporter[240973]: ERROR 09:49:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:49:27 localhost openstack_network_exporter[240973]: ERROR 09:49:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:49:27 
localhost openstack_network_exporter[240973]: Nov 28 04:49:27 localhost openstack_network_exporter[240973]: ERROR 09:49:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:49:27 localhost openstack_network_exporter[240973]: Nov 28 04:49:28 localhost nova_compute[280168]: 2025-11-28 09:49:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:28 localhost nova_compute[280168]: 2025-11-28 09:49:28.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:49:28 localhost nova_compute[280168]: 2025-11-28 09:49:28.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:49:28 localhost nova_compute[280168]: 2025-11-28 09:49:28.258 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:49:28 localhost nova_compute[280168]: 2025-11-28 09:49:28.258 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:49:28 localhost podman[239012]: time="2025-11-28T09:49:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:49:28 localhost podman[239012]: @ - - [28/Nov/2025:09:49:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152069 "" "Go-http-client/1.1" Nov 28 04:49:28 localhost podman[239012]: @ - - [28/Nov/2025:09:49:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18194 "" "Go-http-client/1.1" Nov 28 04:49:29 localhost podman[283169]: 2025-11-28 09:49:29.057758335 +0000 UTC m=+0.158304071 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container) Nov 28 04:49:29 localhost podman[283169]: 2025-11-28 09:49:29.076722629 +0000 UTC m=+0.177268385 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, io.buildah.version=1.33.7, 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git) Nov 28 04:49:29 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:49:29 localhost nova_compute[280168]: 2025-11-28 09:49:29.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:29 localhost nova_compute[280168]: 2025-11-28 09:49:29.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:29 localhost nova_compute[280168]: 2025-11-28 09:49:29.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task 
ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.265 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.266 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.266 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.266 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.267 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 
28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.758 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.960 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.962 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12499MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.962 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:49:30 localhost nova_compute[280168]: 2025-11-28 09:49:30.963 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.052 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.053 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.074 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.546 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.553 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.571 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 
'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.574 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:49:31 localhost nova_compute[280168]: 2025-11-28 09:49:31.574 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:49:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:49:39 localhost podman[283235]: 2025-11-28 09:49:39.017548684 +0000 UTC m=+0.114797933 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3) Nov 28 04:49:39 localhost podman[283236]: 2025-11-28 09:49:39.078623624 +0000 UTC m=+0.172422107 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 04:49:39 localhost podman[283235]: 2025-11-28 09:49:39.10839631 +0000 UTC m=+0.205645599 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:49:39 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:49:39 localhost podman[283234]: 2025-11-28 09:49:39.125053141 +0000 UTC m=+0.226237860 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 04:49:39 localhost podman[283234]: 2025-11-28 09:49:39.138278839 +0000 UTC m=+0.239463558 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:49:39 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:49:39 localhost podman[283237]: 2025-11-28 09:49:39.183994306 +0000 UTC m=+0.274698893 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:49:39 localhost podman[283236]: 2025-11-28 09:49:39.211619325 +0000 UTC m=+0.305417838 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:49:39 localhost podman[283237]: 2025-11-28 09:49:39.221508859 +0000 UTC m=+0.312213426 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , 
managed_by=edpm_ansible) Nov 28 04:49:39 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:49:39 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:49:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:49:43 localhost podman[283321]: 2025-11-28 09:49:43.646135316 +0000 UTC m=+0.085496131 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:49:43 localhost 
podman[283321]: 2025-11-28 09:49:43.653748631 +0000 UTC m=+0.093109446 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:49:43 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:49:47 localhost podman[283492]: Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.050771293 +0000 UTC m=+0.088811024 container create 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , ceph=True, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public) Nov 28 04:49:47 localhost systemd[1]: Started libpod-conmon-2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd.scope. Nov 28 04:49:47 localhost systemd[1]: Started libcrun container. 
Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.013996112 +0000 UTC m=+0.052035853 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.126383989 +0000 UTC m=+0.164423690 container init 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.33.12, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, ceph=True, RELEASE=main, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, name=rhceph, release=553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55) Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.135383146 +0000 UTC m=+0.173422857 container start 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, maintainer=Guillaume Abrioux , version=7, name=rhceph, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, 
GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, build-date=2025-09-24T08:57:55) Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.135758907 +0000 UTC m=+0.173798658 container attach 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, ceph=True, name=rhceph, io.buildah.version=1.33.12) Nov 28 
04:49:47 localhost blissful_agnesi[283508]: 167 167 Nov 28 04:49:47 localhost systemd[1]: libpod-2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd.scope: Deactivated successfully. Nov 28 04:49:47 localhost podman[283492]: 2025-11-28 09:49:47.141605367 +0000 UTC m=+0.179645098 container died 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, ceph=True, release=553, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12) Nov 28 04:49:47 localhost podman[283513]: 2025-11-28 09:49:47.239905961 +0000 UTC m=+0.088476682 container remove 2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=blissful_agnesi, GIT_CLEAN=True, ceph=True, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, vendor=Red Hat, Inc., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Nov 28 04:49:47 localhost systemd[1]: libpod-conmon-2a47286e1af48908d44d463f619c371f623df6447d47b9112591e56778add5dd.scope: Deactivated successfully. Nov 28 04:49:47 localhost podman[283534]: Nov 28 04:49:47 localhost podman[283534]: 2025-11-28 09:49:47.445495727 +0000 UTC m=+0.050180415 container create e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, ceph=True, GIT_CLEAN=True, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=) Nov 28 04:49:47 localhost systemd[1]: Started libpod-conmon-e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd.scope. Nov 28 04:49:47 localhost systemd[1]: Started libcrun container. Nov 28 04:49:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9c528d1e805a12a374a1f3b902cddf269704c3c35566f6b31c54da0d04151aa/merged/rootfs supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9c528d1e805a12a374a1f3b902cddf269704c3c35566f6b31c54da0d04151aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9c528d1e805a12a374a1f3b902cddf269704c3c35566f6b31c54da0d04151aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9c528d1e805a12a374a1f3b902cddf269704c3c35566f6b31c54da0d04151aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 04:49:47 localhost podman[283534]: 2025-11-28 09:49:47.506225375 +0000 UTC m=+0.110910073 container init e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, 
CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, release=553, vcs-type=git, io.buildah.version=1.33.12, ceph=True, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main) Nov 28 04:49:47 localhost podman[283534]: 2025-11-28 09:49:47.518871604 +0000 UTC m=+0.123556322 container start e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main) Nov 28 04:49:47 localhost podman[283534]: 
2025-11-28 09:49:47.519239216 +0000 UTC m=+0.123924114 container attach e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, architecture=x86_64, RELEASE=main, release=553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:49:47 localhost podman[283534]: 2025-11-28 09:49:47.42674389 +0000 UTC m=+0.031428598 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:49:48 localhost systemd[1]: var-lib-containers-storage-overlay-7b6acf2ff29f315409e545cb09d88ef8e6695b6d9535098c83b8dada17045a83-merged.mount: Deactivated successfully. 
Nov 28 04:49:48 localhost admiring_torvalds[283549]: [ Nov 28 04:49:48 localhost admiring_torvalds[283549]: { Nov 28 04:49:48 localhost admiring_torvalds[283549]: "available": false, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "ceph_device": false, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "device_id": "QEMU_DVD-ROM_QM00001", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "lsm_data": {}, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "lvs": [], Nov 28 04:49:48 localhost admiring_torvalds[283549]: "path": "/dev/sr0", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "rejected_reasons": [ Nov 28 04:49:48 localhost admiring_torvalds[283549]: "Insufficient space (<5GB)", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "Has a FileSystem" Nov 28 04:49:48 localhost admiring_torvalds[283549]: ], Nov 28 04:49:48 localhost admiring_torvalds[283549]: "sys_api": { Nov 28 04:49:48 localhost admiring_torvalds[283549]: "actuators": null, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "device_nodes": "sr0", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "human_readable_size": "482.00 KB", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "id_bus": "ata", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "model": "QEMU DVD-ROM", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "nr_requests": "2", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "partitions": {}, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "path": "/dev/sr0", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "removable": "1", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "rev": "2.5+", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "ro": "0", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "rotational": "1", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "sas_address": "", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "sas_device_handle": "", Nov 28 04:49:48 localhost admiring_torvalds[283549]: 
"scheduler_mode": "mq-deadline", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "sectors": 0, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "sectorsize": "2048", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "size": 493568.0, Nov 28 04:49:48 localhost admiring_torvalds[283549]: "support_discard": "0", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "type": "disk", Nov 28 04:49:48 localhost admiring_torvalds[283549]: "vendor": "QEMU" Nov 28 04:49:48 localhost admiring_torvalds[283549]: } Nov 28 04:49:48 localhost admiring_torvalds[283549]: } Nov 28 04:49:48 localhost admiring_torvalds[283549]: ] Nov 28 04:49:48 localhost systemd[1]: libpod-e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd.scope: Deactivated successfully. Nov 28 04:49:48 localhost systemd[1]: libpod-e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd.scope: Consumed 1.155s CPU time. Nov 28 04:49:48 localhost podman[283534]: 2025-11-28 09:49:48.63034748 +0000 UTC m=+1.235032168 container died e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Nov 28 04:49:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:49:48 localhost systemd[1]: var-lib-containers-storage-overlay-f9c528d1e805a12a374a1f3b902cddf269704c3c35566f6b31c54da0d04151aa-merged.mount: Deactivated successfully. Nov 28 04:49:48 localhost podman[285468]: 2025-11-28 09:49:48.777294281 +0000 UTC m=+0.110326435 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:49:48 localhost podman[285462]: 2025-11-28 09:49:48.8000177 +0000 UTC m=+0.156815936 container remove e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_torvalds, release=553, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:49:48 localhost systemd[1]: libpod-conmon-e38edb8798d76b311f83f67cf2ab8a8ca5df320f69bc123e8d898a74db50d6cd.scope: Deactivated successfully. 
Nov 28 04:49:48 localhost podman[285468]: 2025-11-28 09:49:48.820568902 +0000 UTC m=+0.153601076 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:49:48 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:49:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:49:50.832 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:49:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:49:50.832 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:49:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:49:50.833 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:49:57 localhost openstack_network_exporter[240973]: ERROR 09:49:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:49:57 localhost openstack_network_exporter[240973]: ERROR 09:49:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:49:57 localhost openstack_network_exporter[240973]: ERROR 09:49:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:49:57 localhost openstack_network_exporter[240973]: ERROR 09:49:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:49:57 localhost openstack_network_exporter[240973]: Nov 28 04:49:57 localhost openstack_network_exporter[240973]: ERROR 09:49:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:49:57 localhost openstack_network_exporter[240973]: Nov 28 04:49:58 localhost 
podman[239012]: time="2025-11-28T09:49:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:49:58 localhost podman[239012]: @ - - [28/Nov/2025:09:49:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152069 "" "Go-http-client/1.1" Nov 28 04:49:58 localhost podman[239012]: @ - - [28/Nov/2025:09:49:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18198 "" "Go-http-client/1.1" Nov 28 04:49:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:49:59 localhost podman[285498]: 2025-11-28 09:49:59.983982665 +0000 UTC m=+0.089180234 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter) Nov 28 04:50:00 localhost podman[285498]: 2025-11-28 09:50:00.000696869 +0000 UTC m=+0.105894438 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6) Nov 28 04:50:00 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:50:00 localhost systemd[1]: session-62.scope: Deactivated successfully. Nov 28 04:50:00 localhost systemd[1]: session-62.scope: Consumed 1.378s CPU time. Nov 28 04:50:00 localhost systemd-logind[763]: Session 62 logged out. Waiting for processes to exit. Nov 28 04:50:00 localhost systemd-logind[763]: Removed session 62. Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.621 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 
localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:50:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:50:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:50:09 localhost podman[285537]: 2025-11-28 09:50:09.98255481 +0000 UTC m=+0.080647452 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 28 04:50:09 localhost podman[285537]: 2025-11-28 09:50:09.992199936 +0000 UTC m=+0.090292578 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:50:10 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:50:10 localhost systemd[1]: tmp-crun.wM4piK.mount: Deactivated successfully. Nov 28 04:50:10 localhost podman[285536]: 2025-11-28 09:50:10.040910225 +0000 UTC m=+0.139098570 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:50:10 localhost systemd[1]: tmp-crun.sw4ABv.mount: Deactivated successfully. Nov 28 04:50:10 localhost podman[285538]: 2025-11-28 09:50:10.086667733 +0000 UTC m=+0.180140404 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:50:10 localhost podman[285538]: 2025-11-28 09:50:10.097373103 +0000 UTC m=+0.190845784 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:50:10 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:50:10 localhost podman[285536]: 2025-11-28 09:50:10.114359595 +0000 UTC m=+0.212547890 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller) Nov 28 04:50:10 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: 
Deactivated successfully. Nov 28 04:50:10 localhost podman[285535]: 2025-11-28 09:50:10.192788268 +0000 UTC m=+0.292925243 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Nov 28 04:50:10 localhost podman[285535]: 2025-11-28 09:50:10.20749725 +0000 UTC m=+0.307634215 container 
exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Nov 28 04:50:10 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:50:10 localhost systemd[1]: Stopping User Manager for UID 1003... 
Nov 28 04:50:10 localhost systemd[282075]: Activating special unit Exit the Session... Nov 28 04:50:10 localhost systemd[282075]: Stopped target Main User Target. Nov 28 04:50:10 localhost systemd[282075]: Stopped target Basic System. Nov 28 04:50:10 localhost systemd[282075]: Stopped target Paths. Nov 28 04:50:10 localhost systemd[282075]: Stopped target Sockets. Nov 28 04:50:10 localhost systemd[282075]: Stopped target Timers. Nov 28 04:50:10 localhost systemd[282075]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 28 04:50:10 localhost systemd[282075]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 04:50:10 localhost systemd[282075]: Closed D-Bus User Message Bus Socket. Nov 28 04:50:10 localhost systemd[282075]: Stopped Create User's Volatile Files and Directories. Nov 28 04:50:10 localhost systemd[282075]: Removed slice User Application Slice. Nov 28 04:50:10 localhost systemd[282075]: Reached target Shutdown. Nov 28 04:50:10 localhost systemd[282075]: Finished Exit the Session. Nov 28 04:50:10 localhost systemd[282075]: Reached target Exit the Session. Nov 28 04:50:10 localhost systemd[1]: user@1003.service: Deactivated successfully. Nov 28 04:50:10 localhost systemd[1]: Stopped User Manager for UID 1003. Nov 28 04:50:10 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Nov 28 04:50:10 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Nov 28 04:50:10 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Nov 28 04:50:10 localhost systemd[1]: Removed slice User Slice of UID 1003. Nov 28 04:50:10 localhost systemd[1]: user-1003.slice: Consumed 1.809s CPU time. Nov 28 04:50:10 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Nov 28 04:50:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:50:13 localhost podman[285658]: 2025-11-28 09:50:13.989416774 +0000 UTC m=+0.090605389 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:50:14 localhost podman[285658]: 2025-11-28 09:50:14.002534257 +0000 UTC m=+0.103722772 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:50:14 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:50:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:50:18 localhost podman[285681]: 2025-11-28 09:50:18.977876266 +0000 UTC m=+0.085026227 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:50:19 localhost podman[285681]: 2025-11-28 09:50:19.016678649 +0000 UTC m=+0.123828550 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:50:19 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.267 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.268 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.268 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 04:50:25 localhost nova_compute[280168]: 2025-11-28 09:50:25.289 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:26 localhost nova_compute[280168]: 2025-11-28 09:50:26.304 280172 DEBUG 
oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:26 localhost nova_compute[280168]: 2025-11-28 09:50:26.305 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:26 localhost nova_compute[280168]: 2025-11-28 09:50:26.305 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:50:27 localhost openstack_network_exporter[240973]: ERROR 09:50:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:50:27 localhost openstack_network_exporter[240973]: ERROR 09:50:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:50:27 localhost openstack_network_exporter[240973]: ERROR 09:50:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:50:27 localhost openstack_network_exporter[240973]: ERROR 09:50:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:50:27 localhost openstack_network_exporter[240973]: Nov 28 04:50:27 localhost openstack_network_exporter[240973]: ERROR 09:50:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:50:27 localhost openstack_network_exporter[240973]: Nov 28 04:50:28 localhost nova_compute[280168]: 2025-11-28 09:50:28.240 280172 DEBUG oslo_service.periodic_task [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:28 localhost podman[239012]: time="2025-11-28T09:50:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:50:28 localhost podman[239012]: @ - - [28/Nov/2025:09:50:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152069 "" "Go-http-client/1.1" Nov 28 04:50:28 localhost podman[239012]: @ - - [28/Nov/2025:09:50:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18195 "" "Go-http-client/1.1" Nov 28 04:50:29 localhost nova_compute[280168]: 2025-11-28 09:50:29.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.367 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.368 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.368 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.385 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.386 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.386 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.387 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.387 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:50:30 localhost nova_compute[280168]: 2025-11-28 09:50:30.857 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:50:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:50:30 localhost podman[285722]: 2025-11-28 09:50:30.972410357 +0000 UTC m=+0.076453853 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc.) Nov 28 04:50:31 localhost podman[285722]: 2025-11-28 09:50:31.013558413 +0000 UTC m=+0.117601959 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 04:50:31 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated 
successfully. Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.066 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.067 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12492MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, 
{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.068 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.068 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.252 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.253 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB 
total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.389 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.528 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.529 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 04:50:31 localhost 
nova_compute[280168]: 2025-11-28 09:50:31.542 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.566 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_V
GA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:50:31 localhost nova_compute[280168]: 2025-11-28 09:50:31.587 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.049 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.055 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.079 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 
09:50:32.082 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.082 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.953 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:32 localhost nova_compute[280168]: 2025-11-28 09:50:32.954 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:50:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:50:40 localhost podman[285820]: 2025-11-28 09:50:40.985317072 +0000 UTC m=+0.084464470 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:50:41 localhost podman[285819]: 2025-11-28 09:50:40.963477611 +0000 UTC m=+0.069209881 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 04:50:41 localhost podman[285827]: 2025-11-28 09:50:41.035198357 +0000 UTC m=+0.129061902 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:50:41 localhost podman[285827]: 2025-11-28 09:50:41.072448553 +0000 UTC m=+0.166312058 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:50:41 localhost podman[285821]: 2025-11-28 09:50:41.072555217 +0000 UTC m=+0.172162989 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:50:41 localhost podman[285821]: 2025-11-28 09:50:41.081273025 +0000 UTC m=+0.180880807 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 04:50:41 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:50:41 localhost podman[285819]: 2025-11-28 09:50:41.09966244 +0000 UTC m=+0.205394710 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3) Nov 28 04:50:41 localhost podman[285820]: 2025-11-28 09:50:41.099977689 +0000 UTC m=+0.199125167 container exec_died 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 04:50:41 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:50:41 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:50:41 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:50:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:50:44 localhost podman[285903]: 2025-11-28 09:50:44.982805717 +0000 UTC m=+0.084292734 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:50:45 localhost podman[285903]: 2025-11-28 09:50:45.019562239 +0000 UTC m=+0.121049266 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:50:45 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:50:46 localhost podman[286004]: Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.172114918 +0000 UTC m=+0.058061418 container create cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., release=553, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public) Nov 28 04:50:46 localhost systemd[1]: Started libpod-conmon-cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf.scope. Nov 28 04:50:46 localhost systemd[1]: Started libcrun container. 
Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.139949508 +0000 UTC m=+0.025895978 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.239730338 +0000 UTC m=+0.125676788 container init cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, GIT_BRANCH=main, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=553, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True) Nov 28 04:50:46 localhost systemd[1]: tmp-crun.bKAriY.mount: Deactivated successfully. 
Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.251657395 +0000 UTC m=+0.137603835 container start cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux , release=553, io.openshift.expose-services=, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, build-date=2025-09-24T08:57:55, architecture=x86_64) Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.251952243 +0000 UTC m=+0.137898683 container attach cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7) Nov 28 04:50:46 localhost happy_franklin[286019]: 167 167 Nov 28 04:50:46 localhost systemd[1]: libpod-cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf.scope: Deactivated successfully. Nov 28 04:50:46 localhost podman[286004]: 2025-11-28 09:50:46.25832959 +0000 UTC m=+0.144276070 container died cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, ceph=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:50:46 localhost podman[286024]: 2025-11-28 09:50:46.339122576 +0000 UTC m=+0.071436439 container remove cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_franklin, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:50:46 localhost systemd[1]: libpod-conmon-cced2c85b354e0bf58917b909c50dac32447226932b3de68393bcfec839cb5cf.scope: Deactivated successfully. Nov 28 04:50:46 localhost systemd[1]: Reloading. Nov 28 04:50:46 localhost systemd-rc-local-generator[286062]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:50:46 localhost systemd-sysv-generator[286069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: var-lib-containers-storage-overlay-ca2e8ad921ba0d14b62f1213457bc3a508f256cebd9b246f8c28651d5dad8283-merged.mount: Deactivated successfully. Nov 28 04:50:46 localhost systemd[1]: Reloading. Nov 28 04:50:46 localhost systemd-rc-local-generator[286108]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:50:46 localhost systemd-sysv-generator[286112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. 
Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:46 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:50:47 localhost systemd[1]: Starting Ceph mgr.np0005538515.yfkzhl for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... 
Nov 28 04:50:47 localhost podman[286170]: Nov 28 04:50:47 localhost podman[286170]: 2025-11-28 09:50:47.405105911 +0000 UTC m=+0.058400328 container create 351e4a94ab289bce7b4a85395ca5ce7cc15d9e39651879fa3b3c91ad3ed9ba78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, version=7, ceph=True, architecture=x86_64) Nov 28 04:50:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff37e480af970500c0eb217506b25b6534f9ae18ce1d3cf6ced4b6b59bce95f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 04:50:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff37e480af970500c0eb217506b25b6534f9ae18ce1d3cf6ced4b6b59bce95f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:50:47 localhost kernel: xfs filesystem being remounted at 
/var/lib/containers/storage/overlay/9ff37e480af970500c0eb217506b25b6534f9ae18ce1d3cf6ced4b6b59bce95f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 04:50:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ff37e480af970500c0eb217506b25b6534f9ae18ce1d3cf6ced4b6b59bce95f/merged/var/lib/ceph/mgr/ceph-np0005538515.yfkzhl supports timestamps until 2038 (0x7fffffff) Nov 28 04:50:47 localhost podman[286170]: 2025-11-28 09:50:47.464519019 +0000 UTC m=+0.117813436 container init 351e4a94ab289bce7b4a85395ca5ce7cc15d9e39651879fa3b3c91ad3ed9ba78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, RELEASE=main, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, distribution-scope=public, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:50:47 localhost podman[286170]: 2025-11-28 09:50:47.471990539 +0000 UTC m=+0.125284936 container start 351e4a94ab289bce7b4a85395ca5ce7cc15d9e39651879fa3b3c91ad3ed9ba78 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl, io.buildah.version=1.33.12, io.openshift.expose-services=, version=7, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, com.redhat.component=rhceph-container, release=553, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc.) Nov 28 04:50:47 localhost bash[286170]: 351e4a94ab289bce7b4a85395ca5ce7cc15d9e39651879fa3b3c91ad3ed9ba78 Nov 28 04:50:47 localhost podman[286170]: 2025-11-28 09:50:47.379870755 +0000 UTC m=+0.033165222 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:50:47 localhost systemd[1]: Started Ceph mgr.np0005538515.yfkzhl for 2c5417c9-00eb-57d5-a565-ddecbc7995c1. 
Nov 28 04:50:47 localhost ceph-mgr[286188]: set uid:gid to 167:167 (ceph:ceph) Nov 28 04:50:47 localhost ceph-mgr[286188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Nov 28 04:50:47 localhost ceph-mgr[286188]: pidfile_write: ignore empty --pid-file Nov 28 04:50:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'alerts' Nov 28 04:50:47 localhost ceph-mgr[286188]: mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 28 04:50:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'balancer' Nov 28 04:50:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:47.662+0000 7f3293659140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 28 04:50:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:47.727+0000 7f3293659140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 28 04:50:47 localhost ceph-mgr[286188]: mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 28 04:50:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'cephadm' Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'crash' Nov 28 04:50:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:48.378+0000 7f3293659140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Module crash has missing NOTIFY_TYPES member Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'dashboard' Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'devicehealth' Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 28 04:50:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'diskprediction_local' Nov 28 04:50:48 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:48.940+0000 7f3293659140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost systemd[1]: tmp-crun.ziKcZ3.mount: Deactivated successfully. Nov 28 04:50:49 localhost podman[286342]: 2025-11-28 09:50:49.021561473 +0000 UTC m=+0.095693635 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, distribution-scope=public, RELEASE=main, name=rhceph, release=553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64) Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: from numpy import show_config as show_numpy_config Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:49.085+0000 7f3293659140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'influx' Nov 28 04:50:49 localhost podman[286342]: 2025-11-28 09:50:49.125706927 +0000 UTC m=+0.199839109 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_BRANCH=main, 
io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, RELEASE=main, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Module influx has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:49.147+0000 7f3293659140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'insights' Nov 28 04:50:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'iostat' Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'k8sevents' Nov 28 04:50:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:49.266+0000 7f3293659140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 28 04:50:49 localhost podman[286375]: 2025-11-28 09:50:49.26979061 +0000 UTC m=+0.085360438 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 04:50:49 localhost podman[286375]: 2025-11-28 09:50:49.27887972 +0000 UTC m=+0.094449558 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:50:49 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'localpool' Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'mds_autoscaler' Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'mirroring' Nov 28 04:50:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'nfs' Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.026+0000 7f3293659140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'orchestrator' Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'osd_perf_query' Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.182+0000 7f3293659140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'osd_support' Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.250+0000 7f3293659140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.305+0000 7f3293659140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'pg_autoscaler' Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module pg_autoscaler 
has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.370+0000 7f3293659140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'progress' Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module progress has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.428+0000 7f3293659140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'prometheus' Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.733+0000 7f3293659140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rbd_support' Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:50.831+0000 7f3293659140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 28 04:50:50 localhost ceph-mgr[286188]: mgr[py] Loading python module 'restful' Nov 28 04:50:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:50:50.832 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:50:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:50:50.833 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:50:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:50:50.833 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rgw' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:51.195+0000 7f3293659140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rook' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Module rook has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:51.649+0000 7f3293659140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'selftest' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:51.714+0000 7f3293659140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'snap_schedule' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'stats' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'status' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Module status has missing NOTIFY_TYPES 
member Nov 28 04:50:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:51.926+0000 7f3293659140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'telegraf' Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:51.992+0000 7f3293659140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 28 04:50:51 localhost ceph-mgr[286188]: mgr[py] Loading python module 'telemetry' Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:52.138+0000 7f3293659140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Loading python module 'test_orchestrator' Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:52.296+0000 7f3293659140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Loading python module 'volumes' Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:52.507+0000 7f3293659140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Loading python module 'zabbix' Nov 28 04:50:52 localhost ceph-mgr[286188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:50:52.574+0000 7f3293659140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 28 04:50:52 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c51e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 28 04:50:52 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.103:6800/705940825 Nov 28 04:50:57 localhost openstack_network_exporter[240973]: ERROR 09:50:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:50:57 localhost openstack_network_exporter[240973]: ERROR 09:50:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:50:57 localhost openstack_network_exporter[240973]: ERROR 09:50:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:50:57 localhost openstack_network_exporter[240973]: ERROR 09:50:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:50:57 localhost openstack_network_exporter[240973]: Nov 28 04:50:57 localhost openstack_network_exporter[240973]: ERROR 09:50:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:50:57 localhost openstack_network_exporter[240973]: Nov 28 04:50:58 localhost podman[239012]: time="2025-11-28T09:50:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:50:58 localhost podman[239012]: @ - - [28/Nov/2025:09:50:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154135 "" "Go-http-client/1.1" Nov 28 04:50:58 localhost podman[239012]: @ - - [28/Nov/2025:09:50:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18682 "" "Go-http-client/1.1" Nov 28 04:51:01 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:51:01 localhost podman[286960]: 2025-11-28 09:51:01.1531501 +0000 UTC m=+0.089050251 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.expose-services=, vcs-type=git, config_id=edpm) Nov 28 04:51:01 localhost podman[286960]: 2025-11-28 09:51:01.171507964 +0000 UTC m=+0.107408165 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal) Nov 28 04:51:01 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:51:03 localhost podman[287361]: Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.121060243 +0000 UTC m=+0.071066577 container create d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_kilby, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, 
release=553, description=Red Hat Ceph Storage 7, name=rhceph) Nov 28 04:51:03 localhost systemd[1]: Started libpod-conmon-d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051.scope. Nov 28 04:51:03 localhost systemd[1]: Started libcrun container. Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.088400748 +0000 UTC m=+0.038407152 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.197627709 +0000 UTC m=+0.147634043 container init d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_kilby, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True, GIT_CLEAN=True, architecture=x86_64, build-date=2025-09-24T08:57:55, vcs-type=git, release=553, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux ) Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.20740162 +0000 UTC m=+0.157407954 container start d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_kilby, architecture=x86_64, 
GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, distribution-scope=public, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.207756661 +0000 UTC m=+0.157763055 container attach d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_kilby, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:51:03 localhost awesome_kilby[287375]: 167 167 Nov 28 04:51:03 localhost systemd[1]: libpod-d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051.scope: Deactivated successfully. Nov 28 04:51:03 localhost podman[287361]: 2025-11-28 09:51:03.211513236 +0000 UTC m=+0.161519600 container died d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_kilby, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, maintainer=Guillaume Abrioux , version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, distribution-scope=public, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main) Nov 28 04:51:03 localhost podman[287381]: 2025-11-28 09:51:03.314865176 +0000 UTC m=+0.089346540 container remove d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, 
name=awesome_kilby, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, release=553, maintainer=Guillaume Abrioux , RELEASE=main, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:51:03 localhost systemd[1]: libpod-conmon-d4ec5f80a73c7abebbf553cecace275fd9fa303dc6d22eb9379aeb94aa3a0051.scope: Deactivated successfully. 
Nov 28 04:51:03 localhost podman[287402]: Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.436639043 +0000 UTC m=+0.080186189 container create ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.expose-services=, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.) Nov 28 04:51:03 localhost systemd[1]: Started libpod-conmon-ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c.scope. Nov 28 04:51:03 localhost systemd[1]: Started libcrun container. 
Nov 28 04:51:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0717618bce4df3df6ee50208187239f957ee9079bb4439469d3e46b86ed9a1a/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0717618bce4df3df6ee50208187239f957ee9079bb4439469d3e46b86ed9a1a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0717618bce4df3df6ee50208187239f957ee9079bb4439469d3e46b86ed9a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0717618bce4df3df6ee50208187239f957ee9079bb4439469d3e46b86ed9a1a/merged/var/lib/ceph/mon/ceph-np0005538515 supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.404462182 +0000 UTC m=+0.048009348 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.503980545 +0000 UTC m=+0.147527711 container init ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True) Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.510835435 +0000 UTC m=+0.154382581 container start ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, version=7, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=) Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.511017211 +0000 UTC m=+0.154564367 container attach ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 
9, description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, version=7) Nov 28 04:51:03 localhost systemd[1]: libpod-ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c.scope: Deactivated successfully. 
Nov 28 04:51:03 localhost podman[287402]: 2025-11-28 09:51:03.620053165 +0000 UTC m=+0.263600311 container died ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, distribution-scope=public, version=7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=) Nov 28 04:51:03 localhost podman[287443]: 2025-11-28 09:51:03.704766371 +0000 UTC m=+0.072785489 container remove ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_volhard, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, ceph=True) Nov 28 04:51:03 localhost systemd[1]: libpod-conmon-ab397cc5e5dfb411d1fdb02c41d1bdca80fd7aff9a4f89ed6bf65d8ab7cc8b1c.scope: Deactivated successfully. Nov 28 04:51:03 localhost systemd[1]: Reloading. Nov 28 04:51:03 localhost systemd-rc-local-generator[287482]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:51:03 localhost systemd-sysv-generator[287489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:03 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: var-lib-containers-storage-overlay-9261e114a82f80fca245808093d9f3507d752ead7cc93194adad348e3647f68c-merged.mount: Deactivated successfully. Nov 28 04:51:04 localhost systemd[1]: Reloading. Nov 28 04:51:04 localhost systemd-sysv-generator[287528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:51:04 localhost systemd-rc-local-generator[287525]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:51:04 localhost systemd[1]: Starting Ceph mon.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... Nov 28 04:51:04 localhost podman[287586]: Nov 28 04:51:05 localhost podman[287586]: 2025-11-28 09:51:04.913765717 +0000 UTC m=+0.051884326 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:51:05 localhost podman[287586]: 2025-11-28 09:51:05.333509711 +0000 UTC m=+0.471628300 container create a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph 
Storage 7 on RHEL 9, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, vcs-type=git, version=7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:51:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c7a1767f0a2009eee63539253f3f9f0fbc337fc72bb39a68aab86969f5acee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c7a1767f0a2009eee63539253f3f9f0fbc337fc72bb39a68aab86969f5acee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c7a1767f0a2009eee63539253f3f9f0fbc337fc72bb39a68aab86969f5acee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04c7a1767f0a2009eee63539253f3f9f0fbc337fc72bb39a68aab86969f5acee/merged/var/lib/ceph/mon/ceph-np0005538515 supports timestamps until 2038 (0x7fffffff) Nov 28 04:51:05 localhost podman[287586]: 2025-11-28 09:51:05.391343551 +0000 UTC m=+0.529462130 container init a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., 
build-date=2025-09-24T08:57:55, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:51:05 localhost podman[287586]: 2025-11-28 09:51:05.400614646 +0000 UTC m=+0.538733255 container start a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:51:05 localhost bash[287586]: a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf Nov 28 04:51:05 localhost systemd[1]: Started Ceph mon.np0005538515 for 
2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 04:51:05 localhost ceph-mon[287604]: set uid:gid to 167:167 (ceph:ceph) Nov 28 04:51:05 localhost ceph-mon[287604]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Nov 28 04:51:05 localhost ceph-mon[287604]: pidfile_write: ignore empty --pid-file Nov 28 04:51:05 localhost ceph-mon[287604]: load: jerasure load: lrc Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: RocksDB version: 7.9.2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Git sha 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: DB SUMMARY Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: DB Session ID: 18KD68ISQNH5R0YWI96C Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: CURRENT file: CURRENT Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: IDENTITY file: IDENTITY Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005538515/store.db dir, Total Num: 0, files: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005538515/store.db: 000004.log size: 761 ; Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.error_if_exists: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.create_if_missing: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.paranoid_checks: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.verify_sst_unique_id_in_manifest: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.env: 0x5609e0af29e0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.fs: PosixFileSystem Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.info_log: 0x5609e14a4d20 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_file_opening_threads: 16 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.statistics: (nil) Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.use_fsync: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_log_file_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.log_file_time_to_roll: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.keep_log_file_num: 1000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.recycle_log_file_num: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_fallocate: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_mmap_reads: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_mmap_writes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.use_direct_reads: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.create_missing_column_families: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.db_log_dir: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.wal_dir: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.table_cache_numshardbits: 6 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 28 
04:51:05 localhost ceph-mon[287604]: rocksdb: Options.advise_random_on_open: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.db_write_buffer_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.write_buffer_manager: 0x5609e14b5540 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.use_adaptive_mutex: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.rate_limiter: (nil) Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.wal_recovery_mode: 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enable_thread_tracking: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enable_pipelined_write: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.unordered_write: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.row_cache: None Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.wal_filter: None Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_ingest_behind: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.two_write_queues: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.manual_wal_flush: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.wal_compression: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.atomic_flush: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.persist_stats_to_disk: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.log_readahead_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.best_efforts_recovery: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.allow_data_in_errors: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.db_host_id: __hostname__ Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enforce_single_del_contracts: true Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_background_jobs: 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_background_compactions: -1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_subcompactions: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.delayed_write_rate : 16777216 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_total_wal_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.stats_dump_period_sec: 600 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.stats_persist_period_sec: 600 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_open_files: -1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bytes_per_sync: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_readahead_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_background_flushes: -1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Compression algorithms supported: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kZSTD supported: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kXpressCompression supported: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kBZip2Compression supported: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kLZ4Compression supported: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kZlibCompression supported: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: #011kSnappyCompression supported: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: DMutex implementation: pthread_mutex_t Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005538515/store.db/MANIFEST-000005 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.comparator: 
leveldb.BytewiseComparator Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.merge_operator: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_filter: None Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_filter_factory: None Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.sst_partitioner_factory: None Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5609e14a4980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x5609e14a1350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.write_buffer_size: 33554432 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.max_write_buffer_number: 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression: NoCompression Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression: Disabled Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.prefix_extractor: nullptr Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.num_levels: 7 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.level: 32767 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.compression_opts.strategy: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.enabled: false Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_base: 268435456 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.arena_block_size: 1048576 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 04:51:05 
localhost ceph-mon[287604]: rocksdb: Options.table_properties_collectors: Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.inplace_update_support: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.bloom_locality: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.max_successive_merges: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.force_consistency_checks: 1 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.ttl: 2592000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enable_blob_files: false Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.min_blob_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.blob_file_size: 268435456 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 04:51:05 localhost 
ceph-mon[287604]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005538515/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 5fedd929-5f7c-4f1d-86e7-c95af9bc6d32 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323465453140, "job": 1, "event": "recovery_started", "wal_files": [4]} Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323465455933, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 
0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323465456143, "job": 1, "event": "recovery_finished"} Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5609e14c8e00 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: DB pointer 0x5609e15be000 Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:51:05 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit 
group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.84 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Sum 1/0 1.84 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.6 0.00 0.00 1 0.003 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 
level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5609e14a1350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515 does not exist in monmap, will attempt to join an existing cluster Nov 28 04:51:05 localhost ceph-mon[287604]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] Nov 28 04:51:05 localhost ceph-mon[287604]: starting mon.np0005538515 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005538515 fsid 2c5417c9-00eb-57d5-a565-ddecbc7995c1 Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(???) 
e0 preinit fsid 2c5417c9-00eb-57d5-a565-ddecbc7995c1 Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing) e3 sync_obtain_latest_monmap Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3 Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).mds e17 new map Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-28T08:07:30.958224+0000#012modified#0112025-11-28T09:49:53.259185+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01183#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26449}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26449 members: 26449#012[mds.mds.np0005538514.umgtoy{0:26449} state up:active seq 12 addr [v2:172.18.0.107:6808/1969410151,v1:172.18.0.107:6809/1969410151] compat 
{c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005538513.yljthc{-1:16968} state up:standby seq 1 addr [v2:172.18.0.106:6808/2782735008,v1:172.18.0.106:6809/2782735008] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005538515.anvatb{-1:26446} state up:standby seq 1 addr [v2:172.18.0.108:6808/2640180,v1:172.18.0.108:6809/2640180] compat {c=[1],r=[1],i=[17ff]}] Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mgr to host np0005538513.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mgr to host np0005538514.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 
172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mgr to host np0005538515.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 28 04:51:05 localhost ceph-mon[287604]: Saving service mgr spec with placement label:mgr Nov 28 04:51:05 localhost ceph-mon[287604]: Deploying daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 
28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 28 04:51:05 localhost ceph-mon[287604]: Deploying daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host np0005538510.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: Added label _admin to host np0005538510.localdomain Nov 28 04:51:05 localhost 
ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished Nov 28 04:51:05 localhost ceph-mon[287604]: Deploying daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host np0005538511.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label _admin to host np0005538511.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host np0005538512.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost 
ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label _admin to host np0005538512.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host np0005538513.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 
localhost ceph-mon[287604]: Added label _admin to host np0005538513.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host np0005538514.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label _admin to host np0005538514.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label mon to host 
np0005538515.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Added label _admin to host np0005538515.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:05 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: Updating 
np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:05 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:51:05 localhost ceph-mon[287604]: Deploying daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:51:05 localhost ceph-mon[287604]: mon.np0005538515@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3 Nov 28 04:51:05 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c51e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 28 04:51:07 localhost ceph-mon[287604]: mon.np0005538515@-1(probing) e4 my rank is now 3 (was -1) Nov 28 04:51:07 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:51:07 localhost ceph-mon[287604]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 Nov 28 04:51:07 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:07 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Nov 28 04:51:08 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Nov 28 04:51:10 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Nov 
28 04:51:10 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:10 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code} Nov 28 04:51:10 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout} Nov 28 04:51:10 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e4 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:10 localhost ceph-mon[287604]: mgrc update_daemon_metadata mon.np0005538515 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005538515.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005538515.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux} Nov 28 04:51:11 localhost ceph-mon[287604]: Deploying daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:51:11 localhost ceph-mon[287604]: mon.np0005538510 
calling monitor election Nov 28 04:51:11 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:51:11 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:51:11 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:51:11 localhost ceph-mon[287604]: mon.np0005538510 is new leader, mons np0005538510,np0005538512,np0005538511,np0005538515 in quorum (ranks 0,1,2,3) Nov 28 04:51:11 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:51:11 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:11 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:51:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:51:12 localhost systemd[1]: tmp-crun.C9GUYp.mount: Deactivated successfully. 
Nov 28 04:51:12 localhost podman[287644]: 2025-11-28 09:51:12.012720453 +0000 UTC m=+0.113831473 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute) Nov 28 04:51:12 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:12 
localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:51:12 localhost ceph-mon[287604]: Deploying daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:51:12 localhost podman[287646]: 2025-11-28 09:51:12.052787555 +0000 UTC m=+0.149431558 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:51:12 localhost podman[287646]: 2025-11-28 09:51:12.087614827 +0000 UTC m=+0.184258900 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 28 04:51:12 localhost podman[287647]: 2025-11-28 09:51:12.098673557 +0000 UTC m=+0.189933584 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:51:12 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:51:12 localhost podman[287647]: 2025-11-28 09:51:12.108723506 +0000 UTC m=+0.199983563 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:51:12 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:51:12 localhost podman[287645]: 2025-11-28 09:51:12.20731715 +0000 UTC m=+0.305355835 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:51:12 localhost podman[287644]: 2025-11-28 09:51:12.230518314 +0000 UTC m=+0.331629314 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm) Nov 28 04:51:12 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:51:12 localhost podman[287645]: 2025-11-28 09:51:12.277613423 +0000 UTC m=+0.375652108 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:51:12 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:51:12 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e4 adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints Nov 28 04:51:12 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c4f20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 28 04:51:12 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:51:12 localhost ceph-mon[287604]: paxos.3).electionLogic(18) init, last seen epoch 18 Nov 28 04:51:12 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:12 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:13 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Nov 28 04:51:13 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Nov 28 04:51:13 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Nov 28 04:51:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:51:15 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Nov 28 04:51:15 localhost podman[287727]: 2025-11-28 09:51:15.976377667 +0000 UTC m=+0.078739543 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:51:15 localhost podman[287727]: 2025-11-28 09:51:15.98977974 +0000 UTC m=+0.092141656 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, 
managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:51:16 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:51:17 localhost ceph-mds[282859]: mds.beacon.mds.np0005538515.anvatb missed beacon ack from the monitors Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e5 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538510 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538510 is new leader, mons np0005538510,np0005538512,np0005538511,np0005538515,np0005538514 in quorum (ranks 0,1,2,3,4) Nov 28 04:51:17 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:51:17 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e5 handle_auth_request failed to assign global_id Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e5 handle_auth_request failed to assign global_id Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e5 adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints Nov 28 04:51:17 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c5600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 28 04:51:17 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:51:17 localhost ceph-mon[287604]: paxos.3).electionLogic(22) init, last seen epoch 22 Nov 28 04:51:17 
localhost ceph-mon[287604]: mon.np0005538515@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:17 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:19 localhost systemd[1]: tmp-crun.s9vsoH.mount: Deactivated successfully. Nov 28 04:51:19 localhost podman[287874]: 2025-11-28 09:51:19.098254004 +0000 UTC m=+0.131659002 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, build-date=2025-09-24T08:57:55, RELEASE=main, release=553, GIT_CLEAN=True, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:51:19 localhost podman[287874]: 2025-11-28 09:51:19.224775637 +0000 UTC m=+0.258180645 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Nov 28 04:51:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:51:19 localhost podman[287925]: 2025-11-28 09:51:19.470670711 +0000 UTC m=+0.089398471 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:51:19 localhost podman[287925]: 2025-11-28 09:51:19.487587382 +0000 UTC m=+0.106315202 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:51:19 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:51:22 localhost ceph-mon[287604]: mon.np0005538515@3(electing) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:22 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e6 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538510 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:51:23 localhost ceph-mon[287604]: mon.np0005538510 is new leader, mons np0005538510,np0005538512,np0005538511,np0005538515,np0005538514,np0005538513 in quorum (ranks 0,1,2,3,4,5) Nov 28 04:51:23 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:51:23 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:23 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 
172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "config rm", "who": "osd/host:np0005538510", "name": "osd_memory_target"} : dispatch Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:24 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:24 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating 
np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost 
ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:25 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:51:26 localhost ceph-mon[287604]: Reconfiguring mon.np0005538510 (monmap changed)... Nov 28 04:51:26 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538510 on np0005538510.localdomain Nov 28 04:51:26 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:26 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:26 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538510.nzitwz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:27 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538510.nzitwz (monmap changed)... 
Nov 28 04:51:27 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538510.nzitwz on np0005538510.localdomain Nov 28 04:51:27 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:27 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:27 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538510.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:27 localhost openstack_network_exporter[240973]: ERROR 09:51:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:51:27 localhost openstack_network_exporter[240973]: ERROR 09:51:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:51:27 localhost openstack_network_exporter[240973]: ERROR 09:51:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:51:27 localhost openstack_network_exporter[240973]: ERROR 09:51:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:51:27 localhost openstack_network_exporter[240973]: Nov 28 04:51:27 localhost openstack_network_exporter[240973]: ERROR 09:51:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:51:27 localhost openstack_network_exporter[240973]: Nov 28 04:51:28 localhost nova_compute[280168]: 2025-11-28 09:51:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:28 localhost nova_compute[280168]: 2025-11-28 
09:51:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:28 localhost nova_compute[280168]: 2025-11-28 09:51:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:28 localhost nova_compute[280168]: 2025-11-28 09:51:28.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:51:28 localhost ceph-mon[287604]: Reconfiguring crash.np0005538510 (monmap changed)... Nov 28 04:51:28 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538510 on np0005538510.localdomain Nov 28 04:51:28 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:28 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:28 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:28 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:28 localhost podman[239012]: time="2025-11-28T09:51:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:51:28 localhost podman[239012]: @ - - [28/Nov/2025:09:51:28 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:51:28 localhost podman[239012]: @ - - [28/Nov/2025:09:51:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19157 "" "Go-http-client/1.1" Nov 28 04:51:29 localhost ceph-mon[287604]: Reconfiguring crash.np0005538511 (monmap changed)... Nov 28 04:51:29 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:51:29 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:29 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:29 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:51:30 localhost nova_compute[280168]: 2025-11-28 09:51:30.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:30 localhost nova_compute[280168]: 2025-11-28 09:51:30.259 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:30 localhost nova_compute[280168]: 2025-11-28 09:51:30.260 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:51:30 localhost nova_compute[280168]: 2025-11-28 09:51:30.260 280172 
DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:51:30 localhost nova_compute[280168]: 2025-11-28 09:51:30.280 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:51:30 localhost ceph-mon[287604]: Reconfiguring mon.np0005538511 (monmap changed)... Nov 28 04:51:30 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538511 on np0005538511.localdomain Nov 28 04:51:30 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:30 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:30 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... 
Nov 28 04:51:30 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:30 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:51:31 localhost nova_compute[280168]: 2025-11-28 09:51:31.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:31 localhost nova_compute[280168]: 2025-11-28 09:51:31.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' Nov 28 04:51:31 localhost ceph-mon[287604]: Reconfiguring mon.np0005538512 (monmap changed)... 
Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:51:31 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain
Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz'
Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz'
Nov 28 04:51:31 localhost ceph-mon[287604]: from='mgr.14120 172.18.0.103:0/3259224084' entity='mgr.np0005538510.nzitwz' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:51:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 04:51:31 localhost podman[288348]: 2025-11-28 09:51:31.967786834 +0000 UTC m=+0.073082019 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6)
Nov 28 04:51:32 localhost podman[288348]: 2025-11-28 09:51:32.011769648 +0000 UTC m=+0.117064903 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 04:51:32 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 04:51:32 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 28 04:51:32 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 28 04:51:32 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e86 e86: 6 total, 6 up, 6 in
Nov 28 04:51:32 localhost systemd[1]: session-23.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-18.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-24.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-19.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-16.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-21.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 23 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 21 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 19 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 24 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 16 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 18 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd[1]: session-25.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-14.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 25 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 14 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd[1]: session-17.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 17 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd[1]: session-22.scope: Deactivated successfully.
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 04:51:32 localhost systemd[1]: session-20.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd[1]: session-26.scope: Deactivated successfully.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 23.
Nov 28 04:51:32 localhost systemd[1]: session-26.scope: Consumed 3min 30.382s CPU time.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 22 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 20 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Session 26 logged out. Waiting for processes to exit.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 18.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 24.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 19.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 16.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 21.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 25.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 14.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 17.
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 22.
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.265 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.266 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 20.
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.266 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 04:51:32 localhost systemd-logind[763]: Removed session 26.
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.266 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.267 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 04:51:32 localhost sshd[288388]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:51:32 localhost systemd-logind[763]: New session 64 of user ceph-admin.
Nov 28 04:51:32 localhost systemd[1]: Started Session 64 of User ceph-admin.
Nov 28 04:51:32 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 04:51:32 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1096407890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.709 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 04:51:32 localhost ceph-mon[287604]: from='client.? 172.18.0.103:0/3703486687' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 28 04:51:32 localhost ceph-mon[287604]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 28 04:51:32 localhost ceph-mon[287604]: Activating manager daemon np0005538512.zyhkxs
Nov 28 04:51:32 localhost ceph-mon[287604]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 28 04:51:32 localhost ceph-mon[287604]: Manager daemon np0005538512.zyhkxs is now available
Nov 28 04:51:32 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538512.zyhkxs/mirror_snapshot_schedule"} : dispatch
Nov 28 04:51:32 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538512.zyhkxs/mirror_snapshot_schedule"} : dispatch
Nov 28 04:51:32 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538512.zyhkxs/trash_purge_schedule"} : dispatch
Nov 28 04:51:32 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538512.zyhkxs/trash_purge_schedule"} : dispatch
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.896 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.897 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12043MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.897 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.897 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.966 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.967 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 28 04:51:32 localhost nova_compute[280168]: 2025-11-28 09:51:32.989 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 04:51:33 localhost nova_compute[280168]: 2025-11-28 09:51:33.454 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 04:51:33 localhost nova_compute[280168]: 2025-11-28 09:51:33.461 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 28 04:51:33 localhost nova_compute[280168]: 2025-11-28 09:51:33.491 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 28 04:51:33 localhost nova_compute[280168]: 2025-11-28 09:51:33.494 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 28 04:51:33 localhost nova_compute[280168]: 2025-11-28 09:51:33.495 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 04:51:33 localhost systemd[1]: tmp-crun.Bbt5CA.mount: Deactivated successfully.
Nov 28 04:51:33 localhost podman[288525]: 2025-11-28 09:51:33.733034524 +0000 UTC m=+0.107460148 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, GIT_CLEAN=True)
Nov 28 04:51:33 localhost podman[288525]: 2025-11-28 09:51:33.850004692 +0000 UTC m=+0.224430336 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, RELEASE=main, name=rhceph)
Nov 28 04:51:34 localhost nova_compute[280168]: 2025-11-28 09:51:34.496 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 04:51:34 localhost nova_compute[280168]: 2025-11-28 09:51:34.496 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 04:51:35 localhost ceph-mon[287604]: [28/Nov/2025:09:51:33] ENGINE Bus STARTING
Nov 28 04:51:35 localhost ceph-mon[287604]: [28/Nov/2025:09:51:33] ENGINE Serving on https://172.18.0.105:7150
Nov 28 04:51:35 localhost ceph-mon[287604]: [28/Nov/2025:09:51:33] ENGINE Client ('172.18.0.105', 40464) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 28 04:51:35 localhost ceph-mon[287604]: [28/Nov/2025:09:51:33] ENGINE Serving on http://172.18.0.105:8765
Nov 28 04:51:35 localhost ceph-mon[287604]: [28/Nov/2025:09:51:33] ENGINE Bus STARTED
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:35 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e86 _set_new_cache_sizes cache_size:1019548993 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M
Nov 28 04:51:37 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M
Nov 28 04:51:37 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M
Nov 28 04:51:37 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 28 04:51:37 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 28 04:51:37 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538510", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:51:37 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "config rm", "who": "osd/host:np0005538510", "name": "osd_memory_target"} : dispatch
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:39 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring
Nov 28 04:51:40 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:40 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:40 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e86 _set_new_cache_sizes cache_size:1020041534 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538510.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:51:41 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:51:42 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)...
Nov 28 04:51:42 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain
Nov 28 04:51:42 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:42 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:51:42 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs'
Nov 28 04:51:42 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:51:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:51:43 localhost podman[289424]: 2025-11-28 09:51:43.005029884 +0000 UTC m=+0.100597196 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:51:43 localhost podman[289423]: 2025-11-28 09:51:43.056002413 +0000 UTC m=+0.152180994 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 
(image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:51:43 localhost podman[289424]: 2025-11-28 09:51:43.07347607 +0000 UTC m=+0.169043362 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 04:51:43 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:51:43 localhost podman[289423]: 2025-11-28 09:51:43.142817273 +0000 UTC m=+0.238995944 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 04:51:43 localhost systemd[1]: tmp-crun.Dx7RCO.mount: Deactivated successfully. 
Nov 28 04:51:43 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:51:43 localhost podman[289426]: 2025-11-28 09:51:43.163555801 +0000 UTC m=+0.252779947 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:51:43 localhost podman[289426]: 2025-11-28 09:51:43.197113774 +0000 UTC m=+0.286337900 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck 
podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:51:43 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:51:43 localhost podman[289425]: 2025-11-28 09:51:43.217778129 +0000 UTC m=+0.311790603 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent) Nov 28 04:51:43 localhost podman[289425]: 2025-11-28 09:51:43.248922477 +0000 UTC m=+0.342934951 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:51:43 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:51:43 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... Nov 28 04:51:43 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:51:43 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:43 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:43 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:43 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:43 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:44 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... 
Nov 28 04:51:44 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:51:44 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:44 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:51:44 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:45 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:51:45 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:51:45 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:45 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:51:45 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:45 localhost ceph-mon[287604]: mon.np0005538515@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054386 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:51:46 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... 
Nov 28 04:51:46 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:51:46 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:46 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:51:46 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:46 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:51:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:51:46 localhost podman[289506]: 2025-11-28 09:51:46.982874085 +0000 UTC m=+0.089707771 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', 
'--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:51:46 localhost podman[289506]: 2025-11-28 09:51:46.990231672 +0000 UTC m=+0.097065368 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, 
maintainer=The Prometheus Authors ) Nov 28 04:51:47 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:51:47 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:51:47 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:51:47 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:47 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:47 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:47 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:51:47 localhost ceph-mon[287604]: from='mgr.14190 ' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:47 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:51:48 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c5600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0 Nov 28 04:51:48 localhost ceph-mon[287604]: mon.np0005538515@3(peon) e7 my rank is now 2 (was 3) Nov 28 04:51:48 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:51:48 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:51:48 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c4f20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Nov 28 04:51:48 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : 
mon.np0005538515 calling monitor election Nov 28 04:51:48 localhost ceph-mon[287604]: paxos.2).electionLogic(26) init, last seen epoch 26 Nov 28 04:51:48 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:48 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:51:49 localhost systemd[1]: tmp-crun.ooRVKE.mount: Deactivated successfully. Nov 28 04:51:50 localhost podman[289532]: 2025-11-28 09:51:50.018321943 +0000 UTC m=+0.125606246 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, tcib_managed=true, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:51:50 localhost podman[289532]: 2025-11-28 09:51:50.059650844 +0000 UTC m=+0.166935127 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 04:51:50 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:51:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:51:50.833 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:51:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:51:50.834 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:51:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:51:50.834 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:51:53 localhost ceph-mds[282859]: mds.beacon.mds.np0005538515.anvatb missed beacon ack from the monitors Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:53 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth 
get", "entity": "mon."} : dispatch Nov 28 04:51:53 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:51:53 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "mon rm", "name": "np0005538510"} : dispatch Nov 28 04:51:53 localhost ceph-mon[287604]: Remove daemons mon.np0005538510 Nov 28 04:51:53 localhost ceph-mon[287604]: Safe to remove mon.np0005538510: new quorum should be ['np0005538512', 'np0005538511', 'np0005538515', 'np0005538514', 'np0005538513'] (from ['np0005538512', 'np0005538511', 'np0005538515', 'np0005538514', 'np0005538513']) Nov 28 04:51:53 localhost ceph-mon[287604]: Removing monitor np0005538510 from monmap... Nov 28 04:51:53 localhost ceph-mon[287604]: Removing daemon mon.np0005538510 from np0005538510.localdomain -- ports [] Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538511,np0005538515,np0005538513 in quorum (ranks 0,1,2,4) Nov 28 04:51:53 localhost ceph-mon[287604]: Health check failed: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538513 (MON_DOWN) Nov 28 04:51:53 localhost ceph-mon[287604]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538513 Nov 28 04:51:53 localhost ceph-mon[287604]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538513 Nov 28 04:51:53 localhost ceph-mon[287604]: mon.np0005538514 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum) Nov 28 04:51:53 localhost ceph-mon[287604]: 
from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:54 localhost ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:51:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:54 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:51:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:51:55 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054723 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:51:55 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:51:55 localhost ceph-mon[287604]: paxos.2).electionLogic(29) init, last seen epoch 29, mid-election, bumping Nov 28 04:51:55 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:55 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:55 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e7 collect_metadata vda: no unique device id for vda: fallback method 
has no model nor serial Nov 28 04:51:55 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e7 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:51:56 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:51:56 localhost ceph-mon[287604]: Removed label mon from host np0005538510.localdomain Nov 28 04:51:56 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)... Nov 28 04:51:56 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:51:56 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:51:56 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:51:56 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:51:56 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538511,np0005538515,np0005538514,np0005538513 in quorum (ranks 0,1,2,3,4) Nov 28 04:51:56 localhost ceph-mon[287604]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538513) Nov 28 04:51:56 localhost ceph-mon[287604]: Cluster is now healthy Nov 28 04:51:56 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:51:56 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:56 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:56 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:51:57 localhost openstack_network_exporter[240973]: ERROR 09:51:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:51:57 localhost 
openstack_network_exporter[240973]: ERROR 09:51:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:51:57 localhost openstack_network_exporter[240973]: ERROR 09:51:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:51:57 localhost openstack_network_exporter[240973]: ERROR 09:51:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:51:57 localhost openstack_network_exporter[240973]: Nov 28 04:51:57 localhost openstack_network_exporter[240973]: ERROR 09:51:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:51:57 localhost openstack_network_exporter[240973]: Nov 28 04:51:57 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... Nov 28 04:51:57 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:51:57 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:57 localhost ceph-mon[287604]: Removed label mgr from host np0005538510.localdomain Nov 28 04:51:57 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:57 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:57 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:51:58 localhost podman[239012]: time="2025-11-28T09:51:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:51:58 localhost podman[239012]: @ - - [28/Nov/2025:09:51:58 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:51:58 localhost podman[239012]: @ - - [28/Nov/2025:09:51:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19169 "" "Go-http-client/1.1" Nov 28 04:51:59 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... Nov 28 04:51:59 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:59 localhost ceph-mon[287604]: Removed label _admin from host np0005538510.localdomain Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:59 localhost ceph-mon[287604]: Reconfiguring mon.np0005538514 (monmap changed)... 
Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:51:59 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:51:59 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:51:59 localhost podman[289603]: Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.729124842 +0000 UTC m=+0.086402160 container create 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, release=553, architecture=x86_64, maintainer=Guillaume Abrioux , ceph=True, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:51:59 localhost systemd[1]: Started libpod-conmon-6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d.scope. Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.692830685 +0000 UTC m=+0.050108023 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:51:59 localhost systemd[1]: Started libcrun container. Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.817275484 +0000 UTC m=+0.174552802 container init 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, maintainer=Guillaume Abrioux , name=rhceph) Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.831240074 +0000 UTC m=+0.188517392 container start 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, ceph=True, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.83209994 +0000 UTC m=+0.189377298 container attach 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, release=553, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vcs-type=git) Nov 28 04:51:59 localhost quizzical_goldwasser[289617]: 167 167 Nov 28 04:51:59 localhost systemd[1]: libpod-6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d.scope: Deactivated successfully. Nov 28 04:51:59 localhost podman[289603]: 2025-11-28 09:51:59.835389511 +0000 UTC m=+0.192666839 container died 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7) Nov 28 04:51:59 localhost podman[289622]: 2025-11-28 09:51:59.940961119 
+0000 UTC m=+0.094363124 container remove 6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_goldwasser, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:51:59 localhost systemd[1]: libpod-conmon-6ddad776a79fd3c8179346e9a16362a66f623d999dc53229245b966913a6f32d.scope: Deactivated successfully. Nov 28 04:52:00 localhost ceph-mon[287604]: Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:52:00 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:52:00 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:00 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:00 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:52:00 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.620 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this 
cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.622 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.623 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 
28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:52:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:52:00 localhost podman[289691]: Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.656378799 +0000 UTC m=+0.077993020 container create 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red 
Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.openshift.expose-services=, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:52:00 localhost systemd[1]: Started libpod-conmon-39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b.scope. Nov 28 04:52:00 localhost systemd[1]: Started libcrun container. 
Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.623397134 +0000 UTC m=+0.045011395 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.723095162 +0000 UTC m=+0.144709383 container init 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, RELEASE=main, name=rhceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, release=553, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=) Nov 28 04:52:00 localhost systemd[1]: var-lib-containers-storage-overlay-263a15e22c3ffb0b59d878eed662e156aae2c04cb0ed982484a20193bc87d945-merged.mount: Deactivated successfully. 
Nov 28 04:52:00 localhost goofy_grothendieck[289705]: 167 167 Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.739622411 +0000 UTC m=+0.161236632 container start 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, RELEASE=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, name=rhceph) Nov 28 04:52:00 localhost systemd[1]: libpod-39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b.scope: Deactivated successfully. 
Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.740861298 +0000 UTC m=+0.162475519 container attach 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Nov 28 04:52:00 localhost podman[289691]: 2025-11-28 09:52:00.743868671 +0000 UTC m=+0.165482962 container died 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , 
vendor=Red Hat, Inc., release=553, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 28 04:52:00 localhost systemd[1]: var-lib-containers-storage-overlay-0fc72b264f4585e4f3143871a4f11f78923c2fe522743deffb5c4d4086fbf1ae-merged.mount: Deactivated successfully. Nov 28 04:52:00 localhost podman[289710]: 2025-11-28 09:52:00.83938207 +0000 UTC m=+0.088531225 container remove 39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_grothendieck, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, maintainer=Guillaume Abrioux , version=7, name=rhceph, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:00 localhost systemd[1]: libpod-conmon-39732a853f2c7d6fb9059d31699247fd503158b67df963548a1b0548becb255b.scope: Deactivated successfully. Nov 28 04:52:01 localhost ceph-mon[287604]: Reconfiguring osd.1 (monmap changed)... Nov 28 04:52:01 localhost ceph-mon[287604]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:52:01 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:01 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:01 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:52:01 localhost podman[289787]: Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.64709377 +0000 UTC m=+0.080577551 container create bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vcs-type=git, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, 
GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553) Nov 28 04:52:01 localhost systemd[1]: Started libpod-conmon-bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5.scope. Nov 28 04:52:01 localhost systemd[1]: Started libcrun container. Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.711381208 +0000 UTC m=+0.144864989 container init bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, io.buildah.version=1.33.12) Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.616720085 +0000 UTC m=+0.050203906 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.721203679 +0000 UTC m=+0.154687460 container start bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:52:01 localhost vigilant_elgamal[289802]: 167 167 Nov 28 04:52:01 localhost systemd[1]: libpod-bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5.scope: Deactivated successfully. 
Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.721589231 +0000 UTC m=+0.155073022 container attach bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , version=7, CEPH_POINT_RELEASE=) Nov 28 04:52:01 localhost podman[289787]: 2025-11-28 09:52:01.729140594 +0000 UTC m=+0.162624425 container died bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, io.buildah.version=1.33.12, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=553, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, 
ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux ) Nov 28 04:52:01 localhost systemd[1]: var-lib-containers-storage-overlay-4a9db5debd21dd23f23ad88a68baded3c4635497648aba31d8707735a64b93a3-merged.mount: Deactivated successfully. Nov 28 04:52:01 localhost podman[289807]: 2025-11-28 09:52:01.827987364 +0000 UTC m=+0.095543920 container remove bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_elgamal, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:01 localhost systemd[1]: libpod-conmon-bc6687a5a9ab639ffe8d2ed7f4693b7aae602a8cfa238207a10564772d0110c5.scope: Deactivated successfully. Nov 28 04:52:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:52:02 localhost ceph-mon[287604]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:52:02 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:52:02 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:02 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:02 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:52:02 localhost podman[289848]: 2025-11-28 09:52:02.293555419 +0000 UTC m=+0.136740608 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-type=git, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc.) 
Nov 28 04:52:02 localhost podman[289848]: 2025-11-28 09:52:02.311364166 +0000 UTC m=+0.154549415 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public) Nov 28 04:52:02 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:52:02 localhost podman[289903]: Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.697983211 +0000 UTC m=+0.075922737 container create 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, vcs-type=git, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., architecture=x86_64) Nov 28 04:52:02 localhost systemd[1]: Started libpod-conmon-865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f.scope. Nov 28 04:52:02 localhost systemd[1]: tmp-crun.B0bOVL.mount: Deactivated successfully. Nov 28 04:52:02 localhost systemd[1]: Started libcrun container. Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.764644012 +0000 UTC m=+0.142583538 container init 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.669200866 +0000 UTC m=+0.047140452 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.773522545 +0000 UTC m=+0.151462071 container start 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.buildah.version=1.33.12, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, version=7) Nov 28 04:52:02 localhost mystifying_brattain[289918]: 167 167 Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.775248138 +0000 UTC m=+0.153187714 container attach 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_CLEAN=True, 
build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, vcs-type=git, name=rhceph, version=7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:52:02 localhost systemd[1]: libpod-865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f.scope: Deactivated successfully. Nov 28 04:52:02 localhost podman[289903]: 2025-11-28 09:52:02.779818658 +0000 UTC m=+0.157758244 container died 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, build-date=2025-09-24T08:57:55, release=553, GIT_BRANCH=main, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:02 localhost podman[289924]: 2025-11-28 09:52:02.873413889 +0000 UTC m=+0.085624166 container remove 865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=mystifying_brattain, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, vendor=Red Hat, Inc.) Nov 28 04:52:02 localhost systemd[1]: libpod-conmon-865f135ef39cae66b53c1eefb1b69e2c89d7e11139e5613376a460ebac81965f.scope: Deactivated successfully. Nov 28 04:52:03 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... 
Nov 28 04:52:03 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:52:03 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:03 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:03 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:03 localhost podman[289993]: Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.592184201 +0000 UTC m=+0.074946957 container create f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, version=7, vcs-type=git) Nov 28 04:52:03 localhost systemd[1]: Started 
libpod-conmon-f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547.scope. Nov 28 04:52:03 localhost systemd[1]: Started libcrun container. Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.651764494 +0000 UTC m=+0.134527250 container init f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux ) Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.661152864 +0000 UTC m=+0.143915620 container start f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, release=553, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:52:03 localhost naughty_bose[290008]: 167 167 Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.561758475 +0000 UTC m=+0.044521261 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.661437112 +0000 UTC m=+0.144199868 container attach f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, ceph=True, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, maintainer=Guillaume 
Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:52:03 localhost systemd[1]: libpod-f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547.scope: Deactivated successfully. Nov 28 04:52:03 localhost podman[289993]: 2025-11-28 09:52:03.666679104 +0000 UTC m=+0.149441890 container died f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7) Nov 28 04:52:03 localhost systemd[1]: var-lib-containers-storage-overlay-179602c39f11d7560e4f6aeeed47cfb20ec067f15ea64266f61327d8d1bb4fc0-merged.mount: Deactivated successfully. Nov 28 04:52:03 localhost systemd[1]: var-lib-containers-storage-overlay-50e577bb76dafc4aceb6fb7c44cc2aa3e42ca33e42ed4189eeadd14dd72d400e-merged.mount: Deactivated successfully. 
Nov 28 04:52:03 localhost podman[290014]: 2025-11-28 09:52:03.756547159 +0000 UTC m=+0.080892980 container remove f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_bose, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, name=rhceph, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, GIT_BRANCH=main) Nov 28 04:52:03 localhost systemd[1]: libpod-conmon-f99acffce887091ada0b0b48efd749e2acf0a9b2f32ecb0cbc091c439c5fd547.scope: Deactivated successfully. Nov 28 04:52:04 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... 
Nov 28 04:52:04 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:52:04 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:04 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:04 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:52:04 localhost podman[290082]: Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.474459645 +0000 UTC m=+0.081330203 container create 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, name=rhceph, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:04 localhost systemd[1]: Started libpod-conmon-89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096.scope. 
Nov 28 04:52:04 localhost systemd[1]: Started libcrun container. Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.4404917 +0000 UTC m=+0.047362298 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.544434078 +0000 UTC m=+0.151304646 container init 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., ceph=True) Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.552182497 +0000 UTC m=+0.159053055 container start 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., 
build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, name=rhceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, GIT_BRANCH=main) Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.552435794 +0000 UTC m=+0.159306352 container attach 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.buildah.version=1.33.12, 
name=rhceph, build-date=2025-09-24T08:57:55) Nov 28 04:52:04 localhost great_jennings[290098]: 167 167 Nov 28 04:52:04 localhost systemd[1]: libpod-89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096.scope: Deactivated successfully. Nov 28 04:52:04 localhost podman[290082]: 2025-11-28 09:52:04.557018236 +0000 UTC m=+0.163888794 container died 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , vcs-type=git, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Nov 28 04:52:04 localhost podman[290103]: 2025-11-28 09:52:04.657143805 +0000 UTC m=+0.085123039 container remove 89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_jennings, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, 
maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, vcs-type=git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Nov 28 04:52:04 localhost systemd[1]: libpod-conmon-89ded673970392884b8c2ed3020af8bc987ed7fbb85f96ba4415e62330009096.scope: Deactivated successfully. Nov 28 04:52:04 localhost systemd[1]: var-lib-containers-storage-overlay-0955258851a1cdebe2488657501f1ad6abbb78676e18f42475395c8d9af8e7d0-merged.mount: Deactivated successfully. Nov 28 04:52:05 localhost ceph-mon[287604]: Reconfiguring mon.np0005538515 (monmap changed)... 
Nov 28 04:52:05 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:52:05 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:05 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:05 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:07 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:07 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:07 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:52:07 localhost ceph-mon[287604]: Removing np0005538510.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:07 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:07 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: Removing np0005538510.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:52:08 localhost ceph-mon[287604]: Removing 
np0005538510.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:52:08 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:08 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:08 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:08 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:08 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 
172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:08 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:09 localhost ceph-mon[287604]: Removing daemon mgr.np0005538510.nzitwz from np0005538510.localdomain -- ports [9283, 8765] Nov 28 04:52:10 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:10 localhost ceph-mon[287604]: Added label _no_schedule to host np0005538510.localdomain Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:10 localhost ceph-mon[287604]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538510.localdomain Nov 28 04:52:10 localhost ceph-mon[287604]: Removing key for mgr.np0005538510.nzitwz Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth rm", "entity": "mgr.np0005538510.nzitwz"} : dispatch Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005538510.nzitwz"}]': finished Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:10 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:12 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:12 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:12 localhost ceph-mon[287604]: from='mgr.14190 
172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:52:12 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:13 localhost ceph-mon[287604]: Removing daemon crash.np0005538510 from np0005538510.localdomain -- ports [] Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain"} : dispatch Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain"}]': finished Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth rm", "entity": "client.crash.np0005538510.localdomain"} : dispatch Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005538510.localdomain"}]': finished Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:52:13 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' 
Nov 28 04:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:52:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:52:13 localhost podman[290493]: 2025-11-28 09:52:13.376008259 +0000 UTC m=+0.076293569 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:52:13 localhost podman[290495]: 2025-11-28 09:52:13.446131026 +0000 UTC m=+0.137639395 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:52:13 localhost podman[290494]: 2025-11-28 09:52:13.402161673 +0000 UTC m=+0.096422187 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:52:13 localhost podman[290493]: 2025-11-28 09:52:13.470426733 +0000 UTC m=+0.170712073 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:52:13 localhost podman[290495]: 2025-11-28 09:52:13.478326926 +0000 UTC m=+0.169835315 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:52:13 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:52:13 localhost podman[290494]: 2025-11-28 09:52:13.485365853 +0000 UTC m=+0.179626287 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 04:52:13 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:52:13 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:52:13 localhost podman[290501]: 2025-11-28 09:52:13.436684415 +0000 UTC m=+0.125499592 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:52:13 localhost podman[290501]: 2025-11-28 09:52:13.568717037 +0000 UTC m=+0.257532284 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:52:13 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:52:14 localhost ceph-mon[287604]: Removing key for client.crash.np0005538510.localdomain Nov 28 04:52:14 localhost ceph-mon[287604]: Removed host np0005538510.localdomain Nov 28 04:52:14 localhost ceph-mon[287604]: Reconfiguring crash.np0005538511 (monmap changed)... Nov 28 04:52:14 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:14 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:52:14 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:14 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:14 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. 
Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.435873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534436166, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12689, "num_deletes": 779, "total_data_size": 20222003, "memory_usage": 20826280, "flush_reason": "Manual Compaction"} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534545346, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12286694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12694, "table_properties": {"data_size": 12229999, "index_size": 29773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25861, "raw_key_size": 267511, "raw_average_key_size": 25, "raw_value_size": 12056265, "raw_average_value_size": 1167, "num_data_blocks": 1136, "num_entries": 10325, "num_filter_entries": 10325, "num_deletions": 778, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 1764323465, "file_creation_time": 1764323534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 109583 microseconds, and 33495 cpu microseconds. Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.545480) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12286694 bytes OK Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.545538) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.547353) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.547403) EVENT_LOG_v1 {"time_micros": 1764323534547397, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.547426) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20139157, prev total WAL file size 20139906, number of live WAL files 2. 
Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.550900) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373631' seq:0, type:0; will stop at (end) Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1887B)] Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534550993, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12288581, "oldest_snapshot_seqno": -1} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9550 keys, 12274725 bytes, temperature: kUnknown Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534648175, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12274725, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12220028, "index_size": 29700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 254014, "raw_average_key_size": 26, "raw_value_size": 12056323, "raw_average_value_size": 1262, "num_data_blocks": 1134, 
"num_entries": 9550, "num_filter_entries": 9550, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323534, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.648468) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12274725 bytes Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.650241) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.3 rd, 126.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.7, 0.0 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10330, records dropped: 780 output_compression: NoCompression Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.650268) EVENT_LOG_v1 {"time_micros": 1764323534650257, "job": 4, "event": "compaction_finished", "compaction_time_micros": 97266, "compaction_time_cpu_micros": 35352, "output_level": 6, "num_output_files": 1, "total_output_size": 12274725, "num_input_records": 10330, "num_output_records": 9550, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534652202, "job": 4, "event": "table_file_deletion", "file_number": 14} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323534652251, "job": 4, 
"event": "table_file_deletion", "file_number": 8} Nov 28 04:52:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:14.550804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.209788) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535209836, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 291, "num_deletes": 251, "total_data_size": 90618, "memory_usage": 96504, "flush_reason": "Manual Compaction"} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535212993, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 58979, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12699, "largest_seqno": 12985, "table_properties": {"data_size": 57000, "index_size": 218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5345, "raw_average_key_size": 19, "raw_value_size": 53076, "raw_average_value_size": 194, "num_data_blocks": 8, "num_entries": 273, "num_filter_entries": 273, "num_deletions": 251, "num_merge_operands": 0, 
"num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323534, "oldest_key_time": 1764323534, "file_creation_time": 1764323535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 3245 microseconds, and 913 cpu microseconds. Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.213033) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 58979 bytes OK Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.213053) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.214851) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.214876) EVENT_LOG_v1 {"time_micros": 1764323535214870, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.214897) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 88450, prev total WAL file size 88450, number of live WAL files 2. Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.215416) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. 
'7061786F73003130323932' seq:0, type:0; will stop at (end) Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(57KB)], [15(11MB)] Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535215503, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12333704, "oldest_snapshot_seqno": -1} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9307 keys, 11191258 bytes, temperature: kUnknown Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535309062, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11191258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11139798, "index_size": 27103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23301, "raw_key_size": 249390, "raw_average_key_size": 26, "raw_value_size": 10981875, "raw_average_value_size": 1179, "num_data_blocks": 1021, "num_entries": 9307, "num_filter_entries": 9307, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.309496) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11191258 bytes Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.311623) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.6 rd, 119.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.7 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(398.9) write-amplify(189.7) OK, records in: 9823, records dropped: 516 output_compression: NoCompression Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.311657) EVENT_LOG_v1 {"time_micros": 1764323535311644, "job": 6, "event": "compaction_finished", "compaction_time_micros": 93747, "compaction_time_cpu_micros": 37380, "output_level": 6, "num_output_files": 1, "total_output_size": 11191258, "num_input_records": 9823, "num_output_records": 9307, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005538515/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535311971, "job": 6, "event": "table_file_deletion", "file_number": 17} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323535313969, "job": 6, "event": "table_file_deletion", "file_number": 15} Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.215281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.314193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.314204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.314206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.314208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:52:15.314210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:52:15 localhost ceph-mon[287604]: Reconfiguring mon.np0005538511 (monmap changed)... 
Nov 28 04:52:15 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538511 on np0005538511.localdomain Nov 28 04:52:15 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:15 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:15 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:15 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:16 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... Nov 28 04:52:16 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:52:16 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:16 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:16 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:52:17 localhost ceph-mon[287604]: Reconfiguring mon.np0005538512 (monmap changed)... 
Nov 28 04:52:17 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:52:17 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:17 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:17 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:52:17 localhost podman[290576]: 2025-11-28 09:52:17.974786913 +0000 UTC m=+0.078661561 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:52:17 localhost podman[290576]: 2025-11-28 09:52:17.983976226 +0000 UTC m=+0.087850824 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:52:17 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:52:18 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:52:18 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:52:18 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:18 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:18 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:18 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... Nov 28 04:52:18 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:18 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:52:19 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:19 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:19 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... 
Nov 28 04:52:19 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:19 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:52:19 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:20 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:20 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:20 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:52:20 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:52:20 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:52:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:52:20 localhost podman[290599]: 2025-11-28 09:52:20.957227911 +0000 UTC m=+0.067137836 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 04:52:20 localhost podman[290599]: 2025-11-28 09:52:20.96597083 +0000 UTC m=+0.075880765 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:52:20 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:52:21 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:21 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:21 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... Nov 28 04:52:21 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:52:21 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:52:21 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon Nov 28 04:52:21 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:22 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:52:22 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:22 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:23 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c51e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Nov 28 04:52:23 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:52:23 localhost ceph-mon[287604]: paxos.2).electionLogic(32) init, last seen epoch 32 Nov 28 04:52:23 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:23 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:23 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:24 localhost ceph-mon[287604]: Remove daemons mon.np0005538513 Nov 28 04:52:24 localhost ceph-mon[287604]: Safe to remove mon.np0005538513: new quorum should be ['np0005538512', 'np0005538511', 
'np0005538515', 'np0005538514'] (from ['np0005538512', 'np0005538511', 'np0005538515', 'np0005538514']) Nov 28 04:52:24 localhost ceph-mon[287604]: Removing monitor np0005538513 from monmap... Nov 28 04:52:24 localhost ceph-mon[287604]: Removing daemon mon.np0005538513 from np0005538513.localdomain -- ports [] Nov 28 04:52:24 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:52:24 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:52:24 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:52:24 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:52:24 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538511,np0005538515,np0005538514 in quorum (ranks 0,1,2,3) Nov 28 04:52:24 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:24 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": 
"client.admin"} : dispatch Nov 28 04:52:24 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:24 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:24 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:24 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:24 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:25 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:26 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:26 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:26 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:26 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:26 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' 
entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:26 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:27 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:27 localhost openstack_network_exporter[240973]: ERROR 09:52:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:52:27 localhost openstack_network_exporter[240973]: ERROR 09:52:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:52:27 localhost openstack_network_exporter[240973]: ERROR 09:52:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:52:27 localhost openstack_network_exporter[240973]: ERROR 09:52:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:52:27 localhost openstack_network_exporter[240973]: Nov 28 04:52:27 localhost openstack_network_exporter[240973]: ERROR 09:52:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:52:27 localhost 
openstack_network_exporter[240973]: Nov 28 04:52:28 localhost nova_compute[280168]: 2025-11-28 09:52:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:28 localhost nova_compute[280168]: 2025-11-28 09:52:28.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:28 localhost ceph-mon[287604]: Reconfiguring crash.np0005538511 (monmap changed)... Nov 28 04:52:28 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:28 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... 
Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:28 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:28 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:28 localhost podman[239012]: time="2025-11-28T09:52:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:52:28 localhost podman[239012]: @ - - [28/Nov/2025:09:52:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:52:28 localhost podman[239012]: @ - - [28/Nov/2025:09:52:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19164 "" "Go-http-client/1.1" Nov 28 04:52:29 localhost nova_compute[280168]: 2025-11-28 09:52:29.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:29 localhost nova_compute[280168]: 2025-11-28 09:52:29.240 280172 
DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:52:29 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:52:29 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:52:29 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:29 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:29 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:30 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:52:30 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:52:30 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:30 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:30 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:30 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:31 localhost nova_compute[280168]: 2025-11-28 09:52:31.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:31 localhost nova_compute[280168]: 2025-11-28 09:52:31.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:52:31 localhost nova_compute[280168]: 2025-11-28 09:52:31.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:52:31 localhost nova_compute[280168]: 2025-11-28 09:52:31.262 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:52:31 localhost nova_compute[280168]: 2025-11-28 09:52:31.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:31 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:52:31 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:52:31 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:31 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:31 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:52:32 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:52:32 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:52:32 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:32 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:32 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... Nov 28 04:52:32 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:52:32 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:52:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:52:32 localhost systemd[1]: tmp-crun.D2uw0o.mount: Deactivated successfully. Nov 28 04:52:32 localhost podman[291013]: 2025-11-28 09:52:32.976239133 +0000 UTC m=+0.084390947 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal) Nov 28 04:52:33 localhost podman[291013]: 2025-11-28 09:52:33.020512795 +0000 UTC m=+0.128664629 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, architecture=x86_64, distribution-scope=public, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:52:33 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:33 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:52:33 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.107:0/1522556299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.438 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.439 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.439 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.439 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.440 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:52:33 localhost 
ceph-mon[287604]: mon.np0005538515@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:52:33 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3135871719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:52:33 localhost nova_compute[280168]: 2025-11-28 09:52:33.926 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.145 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.147 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12056MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": 
"0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.147 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.148 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:34 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:52:34 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:34 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.246 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.246 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 
pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.279 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.778 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.784 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.815 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.818 280172 DEBUG 
nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:52:34 localhost nova_compute[280168]: 2025-11-28 09:52:34.818 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:52:35 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:52:35 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:52:35 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:35 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:35 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:35 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:52:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2401285035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:52:35 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:36 localhost ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:52:36 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:52:36 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:36 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:36 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:52:36 localhost nova_compute[280168]: 2025-11-28 09:52:36.819 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:52:37 localhost ceph-mon[287604]: Reconfiguring osd.0 (monmap changed)... 
Nov 28 04:52:37 localhost ceph-mon[287604]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:52:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:52:37 localhost ceph-mon[287604]: Deploying daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:52:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:37 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)... Nov 28 04:52:37 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:52:37 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:39 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:52:39 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:39 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e8 adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints Nov 28 04:52:39 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5646e12c4f20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0 Nov 28 04:52:39 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:52:39 localhost ceph-mon[287604]: paxos.2).electionLogic(34) init, last seen epoch 34 Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata 
vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:39 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:52:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:52:43 localhost systemd[1]: tmp-crun.wM0KEP.mount: Deactivated successfully. Nov 28 04:52:43 localhost podman[291078]: 2025-11-28 09:52:43.661689272 +0000 UTC m=+0.085809071 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:52:43 localhost podman[291077]: 2025-11-28 09:52:43.726764234 +0000 UTC m=+0.150034368 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:52:43 localhost podman[291079]: 2025-11-28 09:52:43.771557072 +0000 UTC m=+0.192744601 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Nov 28 04:52:43 localhost podman[291079]: 2025-11-28 09:52:43.778423513 +0000 UTC m=+0.199611062 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 04:52:43 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:52:43 localhost podman[291077]: 2025-11-28 09:52:43.792395232 +0000 UTC m=+0.215665306 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Nov 28 04:52:43 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:52:43 localhost podman[291104]: 2025-11-28 09:52:43.692254622 +0000 UTC m=+0.080700584 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:52:43 localhost podman[291078]: 2025-11-28 09:52:43.847369264 +0000 UTC m=+0.271489113 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125) Nov 28 04:52:43 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:52:43 localhost podman[291104]: 2025-11-28 09:52:43.876523381 +0000 UTC m=+0.264969353 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:52:43 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:52:44 localhost systemd[1]: tmp-crun.dT0Ctr.mount: Deactivated successfully. 
Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538511,np0005538515,np0005538514 in quorum (ranks 0,1,2,3) Nov 28 04:52:44 localhost ceph-mon[287604]: Health check failed: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538514 (MON_DOWN) Nov 28 04:52:44 localhost ceph-mon[287604]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538514 Nov 28 04:52:44 localhost ceph-mon[287604]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538514 Nov 28 04:52:44 localhost ceph-mon[287604]: mon.np0005538513 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum) Nov 28 04:52:44 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:44 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:44 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:52:45 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:45 localhost podman[291217]: Nov 28 04:52:45 
localhost podman[291217]: 2025-11-28 09:52:45.52897265 +0000 UTC m=+0.070571652 container create 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, version=7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , release=553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main) Nov 28 04:52:45 localhost systemd[1]: Started libpod-conmon-93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830.scope. Nov 28 04:52:45 localhost systemd[1]: Started libcrun container. 
Nov 28 04:52:45 localhost podman[291217]: 2025-11-28 09:52:45.600036467 +0000 UTC m=+0.141635429 container init 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, release=553, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:52:45 localhost podman[291217]: 2025-11-28 09:52:45.502114304 +0000 UTC m=+0.043713276 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:45 localhost podman[291217]: 2025-11-28 09:52:45.613864791 +0000 UTC m=+0.155463753 container start 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, version=7, RELEASE=main, ceph=True, vendor=Red Hat, Inc., architecture=x86_64, release=553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:52:45 localhost podman[291217]: 2025-11-28 09:52:45.614299326 +0000 UTC m=+0.155898338 container attach 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, io.buildah.version=1.33.12, com.redhat.component=rhceph-container) Nov 28 04:52:45 localhost modest_clarke[291232]: 167 167 Nov 28 04:52:45 localhost systemd[1]: libpod-93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830.scope: Deactivated successfully. Nov 28 04:52:45 localhost podman[291217]: 2025-11-28 09:52:45.61836636 +0000 UTC m=+0.159965352 container died 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 
9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, name=rhceph) Nov 28 04:52:45 localhost systemd[1]: var-lib-containers-storage-overlay-d4e1b993447ac98af278f889cbe1b53e7d6f874ea92960d3d7adbaaef536a55c-merged.mount: Deactivated successfully. Nov 28 04:52:45 localhost podman[291237]: 2025-11-28 09:52:45.71617889 +0000 UTC m=+0.087825003 container remove 93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_clarke, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, GIT_CLEAN=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph) Nov 28 04:52:45 localhost systemd[1]: libpod-conmon-93e8b5648c77f2f6bb54601f3934c570f39dc5c853de50baf3cdecb973c84830.scope: Deactivated successfully. Nov 28 04:52:45 localhost ceph-mon[287604]: Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:52:45 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:52:45 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:45 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:45 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:52:46 localhost podman[291306]: Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.429997931 +0000 UTC m=+0.084539722 container create 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:52:46 localhost systemd[1]: Started libpod-conmon-8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b.scope. 
Nov 28 04:52:46 localhost systemd[1]: Started libcrun container. Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.397727098 +0000 UTC m=+0.052268909 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.500705296 +0000 UTC m=+0.155247087 container init 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, release=553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.510820958 +0000 UTC m=+0.165362739 container start 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, description=Red Hat Ceph Storage 7, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.511132517 +0000 UTC m=+0.165674298 container attach 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, vcs-type=git, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.33.12, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, 
io.openshift.tags=rhceph ceph, ceph=True) Nov 28 04:52:46 localhost nice_mendel[291321]: 167 167 Nov 28 04:52:46 localhost systemd[1]: libpod-8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b.scope: Deactivated successfully. Nov 28 04:52:46 localhost podman[291306]: 2025-11-28 09:52:46.514183471 +0000 UTC m=+0.168725282 container died 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-type=git, ceph=True, release=553, name=rhceph, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:52:46 localhost podman[291326]: 2025-11-28 09:52:46.626983601 +0000 UTC m=+0.097510801 container remove 8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_mendel, name=rhceph, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55) Nov 28 04:52:46 localhost systemd[1]: libpod-conmon-8c7f960d9b896964b2dc567d7ea8a87fad3d67721751b174a11b222c32a80c4b.scope: Deactivated successfully. Nov 28 04:52:46 localhost systemd[1]: var-lib-containers-storage-overlay-3727b37edf48995ccfcc87ef5d2dc906625fd4c2baa0b3ae8d41eec1a78e6092-merged.mount: Deactivated successfully. 
Nov 28 04:52:46 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:52:46 localhost ceph-mon[287604]: paxos.2).electionLogic(36) init, last seen epoch 36 Nov 28 04:52:46 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:46 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:46 localhost ceph-mon[287604]: mon.np0005538515@2(electing) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:46 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e9 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:52:47 localhost podman[291399]: Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.54811181 +0000 UTC m=+0.082955033 container create fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, maintainer=Guillaume Abrioux , version=7, io.buildah.version=1.33.12, architecture=x86_64, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat 
Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, io.openshift.tags=rhceph ceph) Nov 28 04:52:47 localhost systemd[1]: Started libpod-conmon-fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748.scope. Nov 28 04:52:47 localhost systemd[1]: Started libcrun container. Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.611766659 +0000 UTC m=+0.146609842 container init fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, vendor=Red Hat, Inc.) 
Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.518407926 +0000 UTC m=+0.053251109 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:47 localhost kind_stonebraker[291414]: 167 167 Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.624648195 +0000 UTC m=+0.159491378 container start fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, name=rhceph, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:52:47 localhost systemd[1]: libpod-fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748.scope: Deactivated successfully. 
Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.624997095 +0000 UTC m=+0.159840288 container attach fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, build-date=2025-09-24T08:57:55, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True) Nov 28 04:52:47 localhost podman[291399]: 2025-11-28 09:52:47.628041839 +0000 UTC m=+0.162885042 container died fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, io.buildah.version=1.33.12, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 
7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55) Nov 28 04:52:47 localhost systemd[1]: var-lib-containers-storage-overlay-5b7dcc9e32d04e73258d034ae877554174af6478f92d6d4ccde03635c3390142-merged.mount: Deactivated successfully. Nov 28 04:52:47 localhost podman[291419]: 2025-11-28 09:52:47.731535173 +0000 UTC m=+0.095427547 container remove fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_stonebraker, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-09-24T08:57:55, release=553, description=Red Hat Ceph Storage 7, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) 
Nov 28 04:52:47 localhost systemd[1]: libpod-conmon-fbac79e01f3623ff3927a4fb79200e5236748ad9579c02b82dbeef56fe62c748.scope: Deactivated successfully. Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538511 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:52:47 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538511,np0005538515,np0005538514,np0005538513 in quorum (ranks 0,1,2,3,4) Nov 28 04:52:47 localhost ceph-mon[287604]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005538512,np0005538511,np0005538515,np0005538514) Nov 28 04:52:47 localhost ceph-mon[287604]: Cluster is now healthy Nov 28 04:52:47 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:52:47 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:47 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:47 localhost ceph-mon[287604]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:52:47 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:52:47 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:52:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:52:48 localhost podman[291460]: 2025-11-28 09:52:48.215266256 +0000 UTC m=+0.108636924 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:52:48 localhost podman[291460]: 2025-11-28 09:52:48.25213523 +0000 UTC m=+0.145505918 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:52:48 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:52:48 localhost podman[291518]: Nov 28 04:52:48 localhost podman[291518]: 2025-11-28 09:52:48.655547961 +0000 UTC m=+0.086036967 container create ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, version=7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, name=rhceph) Nov 28 04:52:48 localhost systemd[1]: Started libpod-conmon-ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7.scope. Nov 28 04:52:48 localhost podman[291518]: 2025-11-28 09:52:48.618977816 +0000 UTC m=+0.049466822 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:48 localhost systemd[1]: Started libcrun container. 
Nov 28 04:52:48 localhost podman[291518]: 2025-11-28 09:52:48.73968785 +0000 UTC m=+0.170176846 container init ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, io.buildah.version=1.33.12, architecture=x86_64, ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:52:48 localhost podman[291518]: 2025-11-28 09:52:48.749855153 +0000 UTC m=+0.180344149 container start ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, vcs-type=git, RELEASE=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, release=553, description=Red Hat Ceph Storage 7, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_BRANCH=main) Nov 28 04:52:48 localhost podman[291518]: 2025-11-28 09:52:48.750204473 +0000 UTC m=+0.180693469 container attach ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_BRANCH=main) Nov 28 04:52:48 localhost intelligent_banzai[291534]: 167 167 Nov 28 04:52:48 localhost systemd[1]: libpod-ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7.scope: Deactivated successfully. Nov 28 04:52:48 localhost podman[291541]: 2025-11-28 09:52:48.82516366 +0000 UTC m=+0.055346614 container died ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:52:48 localhost podman[291541]: 2025-11-28 09:52:48.872931339 +0000 UTC m=+0.103114253 container remove ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_banzai, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph) Nov 28 04:52:48 localhost systemd[1]: libpod-conmon-ad9bf77c17751781700c0c0d48a17139af47b781b2e53c1573f0a34feb3019d7.scope: Deactivated successfully. Nov 28 04:52:48 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:48 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:48 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... 
Nov 28 04:52:48 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:52:48 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:52:49 localhost podman[291610]: Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.642870187 +0000 UTC m=+0.077525936 container create 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, release=553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, RELEASE=main, ceph=True) Nov 28 04:52:49 localhost systemd[1]: var-lib-containers-storage-overlay-9bd864515b5a1f8b47ff2c371c21eef8657baf75a14d22d84d8d5354b9dae190-merged.mount: Deactivated successfully. 
Nov 28 04:52:49 localhost systemd[1]: Started libpod-conmon-43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95.scope. Nov 28 04:52:49 localhost systemd[1]: Started libcrun container. Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.612004968 +0000 UTC m=+0.046660717 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.721053873 +0000 UTC m=+0.155709612 container init 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, vcs-type=git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_CLEAN=True) Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.731032649 +0000 UTC m=+0.165688468 container start 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, 
architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, build-date=2025-09-24T08:57:55, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.731473373 +0000 UTC m=+0.166129152 container attach 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, 
io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=) Nov 28 04:52:49 localhost busy_mahavira[291625]: 167 167 Nov 28 04:52:49 localhost systemd[1]: libpod-43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95.scope: Deactivated successfully. Nov 28 04:52:49 localhost podman[291610]: 2025-11-28 09:52:49.734164355 +0000 UTC m=+0.168820134 container died 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:52:49 localhost podman[291630]: 2025-11-28 09:52:49.825566948 +0000 UTC m=+0.080332853 container remove 43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_mahavira, ceph=True, io.buildah.version=1.33.12, io.openshift.tags=rhceph 
ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, version=7) Nov 28 04:52:49 localhost systemd[1]: libpod-conmon-43fefb26711867a1d278c7efa6e7f0da448b05c281fce655805ccb0d2997dc95.scope: Deactivated successfully. Nov 28 04:52:50 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:50 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:50 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... 
Nov 28 04:52:50 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:52:50 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:52:50 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:50 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:50 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:50 localhost systemd[1]: var-lib-containers-storage-overlay-4be03bd20a4323a9ca325359fcb32a3eb720231157e3fc56c0f99faba10a4ce0-merged.mount: Deactivated successfully. Nov 28 04:52:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:52:50.835 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:52:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:52:50.837 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:52:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:52:50.837 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 
04:52:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:52:51 localhost systemd[1]: tmp-crun.gmsfX9.mount: Deactivated successfully. Nov 28 04:52:52 localhost podman[291715]: 2025-11-28 09:52:51.999975505 +0000 UTC m=+0.101197674 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:52:52 localhost podman[291715]: 2025-11-28 09:52:52.036723115 +0000 UTC m=+0.137945244 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:52:52 localhost systemd[1]: 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:52 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:52:53 localhost 
ceph-mon[287604]: Reconfig service osd.default_drive_group Nov 28 04:52:53 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:53 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:53 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:53 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:53 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:54 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 e87: 6 total, 6 up, 6 in Nov 28 04:52:54 localhost systemd[1]: session-64.scope: Deactivated successfully. Nov 28 04:52:54 localhost systemd[1]: session-64.scope: Consumed 20.319s CPU time. Nov 28 04:52:54 localhost systemd-logind[763]: Session 64 logged out. Waiting for processes to exit. Nov 28 04:52:54 localhost systemd-logind[763]: Removed session 64. Nov 28 04:52:54 localhost sshd[292054]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:52:54 localhost systemd-logind[763]: New session 65 of user ceph-admin. Nov 28 04:52:54 localhost systemd[1]: Started Session 65 of User ceph-admin. Nov 28 04:52:54 localhost ceph-mon[287604]: from='client.? 172.18.0.200:0/1216930330' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: Activating manager daemon np0005538514.djozup Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:54 localhost ceph-mon[287604]: from='client.? 
172.18.0.200:0/1216930330' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.14190 172.18.0.105:0/2825788323' entity='mgr.np0005538512.zyhkxs' Nov 28 04:52:54 localhost ceph-mon[287604]: Manager daemon np0005538514.djozup is now available Nov 28 04:52:54 localhost ceph-mon[287604]: removing stray HostCache host record np0005538510.localdomain.devices.0 Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"}]': finished Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538510.localdomain.devices.0"}]': finished Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/mirror_snapshot_schedule"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' 
cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/mirror_snapshot_schedule"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/trash_purge_schedule"} : dispatch Nov 28 04:52:54 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/trash_purge_schedule"} : dispatch Nov 28 04:52:55 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:52:55 localhost podman[292163]: 2025-11-28 09:52:55.706786247 +0000 UTC m=+0.090587037 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, release=553, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, 
vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Nov 28 04:52:55 localhost podman[292163]: 2025-11-28 09:52:55.812229642 +0000 UTC m=+0.196030412 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:52:56 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:56 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:56 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: [28/Nov/2025:09:52:55] ENGINE Bus STARTING Nov 28 04:52:57 localhost ceph-mon[287604]: [28/Nov/2025:09:52:56] ENGINE Serving on http://172.18.0.107:8765 Nov 28 04:52:57 localhost ceph-mon[287604]: [28/Nov/2025:09:52:56] ENGINE Serving on https://172.18.0.107:7150 Nov 28 04:52:57 localhost ceph-mon[287604]: [28/Nov/2025:09:52:56] ENGINE Bus STARTED Nov 28 04:52:57 localhost ceph-mon[287604]: [28/Nov/2025:09:52:56] ENGINE Client ('172.18.0.107', 40776) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:57 localhost openstack_network_exporter[240973]: ERROR 09:52:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:52:57 localhost openstack_network_exporter[240973]: ERROR 09:52:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:52:57 localhost openstack_network_exporter[240973]: 
ERROR 09:52:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:52:57 localhost openstack_network_exporter[240973]: ERROR 09:52:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:52:57 localhost openstack_network_exporter[240973]: Nov 28 04:52:57 localhost openstack_network_exporter[240973]: ERROR 09:52:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:52:57 localhost openstack_network_exporter[240973]: Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": 
"osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' 
cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:52:58 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:52:58 localhost podman[239012]: time="2025-11-28T09:52:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:52:58 localhost podman[239012]: @ - - [28/Nov/2025:09:52:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:52:58 localhost podman[239012]: @ - - [28/Nov/2025:09:52:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19161 "" "Go-http-client/1.1" Nov 28 04:52:59 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:52:59 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:52:59 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:52:59 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 
04:52:59 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:52:59 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:52:59 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:59 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:59 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:59 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:52:59 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:00 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:00 localhost 
ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' 
entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:01 localhost ceph-mon[287604]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Nov 28 04:53:01 localhost ceph-mon[287604]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Nov 28 04:53:02 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:02 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:03 localhost ceph-mon[287604]: Reconfiguring crash.np0005538511 (monmap changed)... 
Nov 28 04:53:03 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:03 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 04:53:03 localhost systemd[1]: tmp-crun.GoUW4O.mount: Deactivated successfully.
Nov 28 04:53:04 localhost podman[293060]: 2025-11-28 09:53:04.003796041 +0000 UTC m=+0.099153781 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 28 04:53:04 localhost podman[293060]: 2025-11-28 09:53:04.023745574 +0000 UTC m=+0.119103384 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 28 04:53:04 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 04:53:04 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538511.fvuybw (monmap changed)...
Nov 28 04:53:04 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain
Nov 28 04:53:04 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)...
Nov 28 04:53:04 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain
Nov 28 04:53:04 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:04 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:04 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:04 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:04 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:05 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)...
Nov 28 04:53:05 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain
Nov 28 04:53:05 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:05 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:05 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)...
Nov 28 04:53:05 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:05 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:05 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain
Nov 28 04:53:05 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:53:07 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:07 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:07 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)...
Nov 28 04:53:07 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 28 04:53:07 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:08 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)...
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 28 04:53:08 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain
Nov 28 04:53:08 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:09 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 28 04:53:09 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 28 04:53:10 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)...
Nov 28 04:53:10 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain
Nov 28 04:53:10 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:10 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:10 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:10 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:10 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:53:11 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)...
Nov 28 04:53:11 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain
Nov 28 04:53:11 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:11 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:11 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:53:12 localhost ceph-mon[287604]: Reconfiguring mon.np0005538513 (monmap changed)...
Nov 28 04:53:12 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain
Nov 28 04:53:12 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:12 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:12 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:12 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:13 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e9 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 04:53:13 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3388432170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 04:53:13 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e9 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 04:53:13 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3388432170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 04:53:13 localhost ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)...
Nov 28 04:53:13 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain
Nov 28 04:53:13 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:13 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup'
Nov 28 04:53:13 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.487662) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593487701, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2879, "num_deletes": 254, "total_data_size": 9409171, "memory_usage": 9832624, "flush_reason": "Manual Compaction"}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593536521, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 5653023, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12990, "largest_seqno": 15864, "table_properties": {"data_size": 5640920, "index_size": 7584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31389, "raw_average_key_size": 22, "raw_value_size": 5614350, "raw_average_value_size": 4086, "num_data_blocks": 327, "num_entries": 1374, "num_filter_entries": 1374, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323535, "oldest_key_time": 1764323535, "file_creation_time": 1764323593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 48916 microseconds, and 11224 cpu microseconds.
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.536574) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 5653023 bytes OK
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.536598) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.541594) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.541645) EVENT_LOG_v1 {"time_micros": 1764323593541635, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.541667) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 9395148, prev total WAL file size 9395148, number of live WAL files 2.
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.543645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(5520KB)], [18(10MB)]
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593543730, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 16844281, "oldest_snapshot_seqno": -1}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 10132 keys, 14814666 bytes, temperature: kUnknown
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593658638, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14814666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14756604, "index_size": 31657, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 269423, "raw_average_key_size": 26, "raw_value_size": 14583304, "raw_average_value_size": 1439, "num_data_blocks": 1215, "num_entries": 10132, "num_filter_entries": 10132, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.659206) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14814666 bytes
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.661105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.4 rd, 128.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.4, 10.7 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(5.6) write-amplify(2.6) OK, records in: 10681, records dropped: 549 output_compression: NoCompression
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.661138) EVENT_LOG_v1 {"time_micros": 1764323593661125, "job": 8, "event": "compaction_finished", "compaction_time_micros": 115024, "compaction_time_cpu_micros": 41806, "output_level": 6, "num_output_files": 1, "total_output_size": 14814666, "num_input_records": 10681, "num_output_records": 10132, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593662159, "job": 8, "event": "table_file_deletion", "file_number": 20}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323593663897, "job": 8, "event": "table_file_deletion", "file_number": 18}
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.543304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.664043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.664051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.664055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.664058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:13.664061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:53:13 localhost podman[293081]: 2025-11-28 09:53:13.980226094 +0000 UTC m=+0.082539200 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 28 04:53:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:53:14 localhost podman[293080]: 2025-11-28 09:53:14.03240382 +0000 UTC m=+0.136730968 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 04:53:14 localhost podman[293081]: 2025-11-28 09:53:14.043388268 +0000 UTC m=+0.145701344 container exec_died 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:53:14 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:53:14 localhost podman[293080]: 2025-11-28 09:53:14.094189501 +0000 UTC m=+0.198516639 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2) Nov 28 04:53:14 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:53:14 localhost podman[293082]: 2025-11-28 09:53:14.189688369 +0000 UTC m=+0.288921740 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible) Nov 28 04:53:14 localhost podman[293082]: 2025-11-28 09:53:14.219598509 +0000 UTC 
m=+0.318831850 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:53:14 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:53:14 localhost podman[293121]: 2025-11-28 09:53:14.23850964 +0000 UTC m=+0.232448162 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:53:14 localhost podman[293121]: 2025-11-28 09:53:14.252510051 +0000 UTC m=+0.246448533 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:53:14 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:53:14 localhost ceph-mon[287604]: Reconfiguring osd.0 (monmap changed)... Nov 28 04:53:14 localhost ceph-mon[287604]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:53:14 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:14 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:14 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:14 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:14 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)... Nov 28 04:53:14 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:53:14 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:53:15 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:15 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:15 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:16 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... 
Nov 28 04:53:16 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:53:16 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:16 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:16 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:17 localhost ceph-mon[287604]: Reconfiguring mon.np0005538514 (monmap changed)... Nov 28 04:53:17 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:53:17 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:17 localhost ceph-mon[287604]: from='mgr.17142 ' entity='mgr.np0005538514.djozup' Nov 28 04:53:17 localhost ceph-mon[287604]: from='mgr.17142 172.18.0.107:0/992118114' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:53:17 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e88 e88: 6 total, 6 up, 6 in Nov 28 04:53:17 localhost ceph-mgr[286188]: mgr handle_mgr_map Activating! 
Nov 28 04:53:17 localhost ceph-mgr[286188]: mgr handle_mgr_map I am now activating Nov 28 04:53:18 localhost ceph-mgr[286188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: balancer Nov 28 04:53:18 localhost ceph-mgr[286188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: [balancer INFO root] Starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_09:53:18 Nov 28 04:53:18 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 04:53:18 localhost ceph-mgr[286188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later Nov 28 04:53:18 localhost systemd-logind[763]: Session 65 logged out. Waiting for processes to exit. Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: cephadm Nov 28 04:53:18 localhost ceph-mgr[286188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: crash Nov 28 04:53:18 localhost ceph-mgr[286188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: devicehealth Nov 28 04:53:18 localhost ceph-mgr[286188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: iostat Nov 28 04:53:18 localhost ceph-mgr[286188]: [devicehealth INFO root] Starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: nfs Nov 28 04:53:18 localhost ceph-mgr[286188]: [orchestrator DEBUG root] setting log level based on debug_mgr: 
INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: orchestrator Nov 28 04:53:18 localhost ceph-mgr[286188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: pg_autoscaler Nov 28 04:53:18 localhost ceph-mgr[286188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: progress Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: [progress INFO root] Loading... Nov 28 04:53:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 04:53:18 localhost ceph-mgr[286188]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events Nov 28 04:53:18 localhost ceph-mgr[286188]: [progress INFO root] Loaded OSDMap, ready. 
Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] recovery thread starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] starting setup Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: rbd_support Nov 28 04:53:18 localhost ceph-mgr[286188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: restful Nov 28 04:53:18 localhost ceph-mgr[286188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: status Nov 28 04:53:18 localhost ceph-mgr[286188]: [restful INFO root] server_addr: :: server_port: 8003 Nov 28 04:53:18 localhost ceph-mgr[286188]: [restful WARNING root] server not running: no certificate configured Nov 28 04:53:18 localhost ceph-mgr[286188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: telemetry Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5) Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] PerfHandler: starting Nov 28 04:53:18 localhost ceph-mgr[286188]: 
[rbd_support INFO root] load_task_task: vms, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: volumes, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: images, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: backups, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] TaskHandler: starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting Nov 28 04:53:18 localhost ceph-mgr[286188]: [rbd_support INFO root] setup complete Nov 28 04:53:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 04:53:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 04:53:18 localhost ceph-mgr[286188]: mgr load Constructed class from module: volumes Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error 
registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.260+0000 7f3200b77640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.260+0000 7f3200b77640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.260+0000 7f3200b77640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.260+0000 7f3200b77640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.260+0000 7f3200b77640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.261+0000 7f3204b7f640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 
2025-11-28T09:53:18.261+0000 7f3204b7f640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.261+0000 7f3204b7f640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.261+0000 7f3204b7f640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:53:18.261+0000 7f3204b7f640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 04:53:18 localhost podman[293324]: Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.304220935 +0000 UTC m=+0.072048668 container create d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, release=553, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:53:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:53:18 localhost systemd[1]: Started libpod-conmon-d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52.scope. Nov 28 04:53:18 localhost systemd[1]: Started libcrun container. Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.282566838 +0000 UTC m=+0.050394661 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:18 localhost sshd[293386]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:53:18 localhost systemd[1]: tmp-crun.cvoGQF.mount: Deactivated successfully. Nov 28 04:53:18 localhost podman[293369]: 2025-11-28 09:53:18.39635359 +0000 UTC m=+0.068893521 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 
'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:53:18 localhost podman[293369]: 2025-11-28 09:53:18.405596254 +0000 UTC m=+0.078136185 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:53:18 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.422467033 +0000 UTC m=+0.190294776 container init d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main) Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.434512143 +0000 UTC m=+0.202339886 container start d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, RELEASE=main, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, 
GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, distribution-scope=public, release=553, architecture=x86_64) Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.43473174 +0000 UTC m=+0.202559503 container attach d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, vcs-type=git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=) Nov 28 04:53:18 localhost systemd[1]: libpod-d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52.scope: Deactivated successfully. 
Nov 28 04:53:18 localhost exciting_chaplygin[293379]: 167 167 Nov 28 04:53:18 localhost podman[293324]: 2025-11-28 09:53:18.439635431 +0000 UTC m=+0.207463194 container died d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Nov 28 04:53:18 localhost systemd-logind[763]: New session 66 of user ceph-admin. Nov 28 04:53:18 localhost systemd[1]: Started Session 66 of User ceph-admin. 
Nov 28 04:53:18 localhost podman[293401]: 2025-11-28 09:53:18.517395363 +0000 UTC m=+0.070998735 container remove d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_chaplygin, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, name=rhceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main) Nov 28 04:53:18 localhost systemd[1]: libpod-conmon-d9255291562c6a222a500ed68e4a06693d1234b5a66802f27ea28d27fc845f52.scope: Deactivated successfully. Nov 28 04:53:18 localhost systemd[1]: session-65.scope: Deactivated successfully. Nov 28 04:53:18 localhost systemd[1]: session-65.scope: Consumed 6.588s CPU time. Nov 28 04:53:18 localhost systemd-logind[763]: Removed session 65. Nov 28 04:53:18 localhost ceph-mon[287604]: from='client.? 172.18.0.200:0/1038640921' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 28 04:53:18 localhost ceph-mon[287604]: Activating manager daemon np0005538515.yfkzhl Nov 28 04:53:18 localhost ceph-mon[287604]: from='client.? 
172.18.0.200:0/1038640921' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 28 04:53:18 localhost ceph-mon[287604]: Manager daemon np0005538515.yfkzhl is now available Nov 28 04:53:18 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} : dispatch Nov 28 04:53:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} : dispatch Nov 28 04:53:18 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} : dispatch Nov 28 04:53:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} : dispatch Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:19 localhost systemd[1]: var-lib-containers-storage-overlay-50a76ba9299418af860d305dcb8580e6d6bdc14f92426abdd892d36b9e2159d5-merged.mount: Deactivated successfully. Nov 28 04:53:19 localhost systemd[1]: tmp-crun.mGVAsw.mount: Deactivated successfully. 
Nov 28 04:53:19 localhost podman[293533]: 2025-11-28 09:53:19.535125534 +0000 UTC m=+0.109167219 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=) Nov 28 04:53:19 localhost podman[293533]: 2025-11-28 09:53:19.639489525 +0000 UTC m=+0.213531220 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=553, GIT_CLEAN=True, io.buildah.version=1.33.12, 
io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , version=7, GIT_BRANCH=main, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public) Nov 28 04:53:19 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:53:19] ENGINE Bus STARTING Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:53:19] ENGINE Bus STARTING Nov 28 04:53:19 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:53:19] ENGINE Serving on https://172.18.0.108:7150 Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:53:19] ENGINE Serving on https://172.18.0.108:7150 Nov 28 04:53:19 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:53:19] ENGINE Client ('172.18.0.108', 34804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:53:19] ENGINE Client ('172.18.0.108', 34804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:53:19 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:53:19] ENGINE Serving on http://172.18.0.108:8765 Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:53:19] ENGINE Serving on http://172.18.0.108:8765 Nov 28 04:53:19 
localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:53:19] ENGINE Bus STARTED Nov 28 04:53:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:53:19] ENGINE Bus STARTED Nov 28 04:53:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:20 localhost ceph-mgr[286188]: [devicehealth INFO root] Check health Nov 28 04:53:20 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:21 localhost ceph-mon[287604]: [28/Nov/2025:09:53:19] ENGINE Bus STARTING Nov 28 04:53:21 localhost ceph-mon[287604]: [28/Nov/2025:09:53:19] ENGINE Serving on https://172.18.0.108:7150 Nov 28 04:53:21 localhost ceph-mon[287604]: [28/Nov/2025:09:53:19] ENGINE Client ('172.18.0.108', 34804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:53:21 localhost ceph-mon[287604]: [28/Nov/2025:09:53:19] ENGINE Serving on http://172.18.0.108:8765 Nov 28 04:53:21 localhost ceph-mon[287604]: [28/Nov/2025:09:53:19] ENGINE Bus STARTED Nov 28 04:53:21 localhost ceph-mon[287604]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm) Nov 28 04:53:21 localhost ceph-mon[287604]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm) Nov 28 04:53:21 localhost ceph-mon[287604]: Cluster is now healthy Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' 
entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below 
minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating 
np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:53:22 localhost podman[293881]: 2025-11-28 09:53:22.213376283 +0000 UTC m=+0.084708857 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 28 04:53:22 localhost podman[293881]: 2025-11-28 09:53:22.230476459 +0000 UTC m=+0.101809033 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
config_id=multipathd) Nov 28 04:53:22 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd/host:np0005538511", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 
172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: 
from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:53:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 
localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mgr.np0005538514.djozup 172.18.0.107:0/1408760265; not ready for session (expect reconnect) Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring 
Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:53:23 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:23 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:53:23 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:23 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:23 localhost ceph-mgr[286188]: 
[cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 
104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:24 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:53:24 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:24 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:24 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:24 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. 
Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.527546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604527671, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 973, "num_deletes": 276, "total_data_size": 5014026, "memory_usage": 5158144, "flush_reason": "Manual Compaction"} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604550573, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3201627, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15869, "largest_seqno": 16837, "table_properties": {"data_size": 3196696, "index_size": 2334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11990, "raw_average_key_size": 20, "raw_value_size": 3186115, "raw_average_value_size": 5372, "num_data_blocks": 96, "num_entries": 593, "num_filter_entries": 593, "num_deletions": 275, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323593, "oldest_key_time": 1764323593, "file_creation_time": 1764323604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 23086 microseconds, and 8622 cpu microseconds. Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.550654) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3201627 bytes OK Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.550691) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.552173) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.552200) EVENT_LOG_v1 {"time_micros": 1764323604552192, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.552235) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 5008595, prev total WAL file size 
5008595, number of live WAL files 2. Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.553605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303231' seq:72057594037927935, type:22 .. '6B760031323936' seq:0, type:0; will stop at (end) Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3126KB)], [21(14MB)] Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604553662, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 18016293, "oldest_snapshot_seqno": -1} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10142 keys, 16854249 bytes, temperature: kUnknown Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604692988, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16854249, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16796604, "index_size": 31176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 271822, "raw_average_key_size": 26, "raw_value_size": 16623397, 
"raw_average_value_size": 1639, "num_data_blocks": 1176, "num_entries": 10142, "num_filter_entries": 10142, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.693624) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16854249 bytes Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.695245) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.0 rd, 120.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 14.1 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(10.9) write-amplify(5.3) OK, records in: 10725, records dropped: 583 output_compression: NoCompression Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.695279) EVENT_LOG_v1 {"time_micros": 1764323604695264, "job": 10, "event": "compaction_finished", "compaction_time_micros": 139707, "compaction_time_cpu_micros": 47360, "output_level": 6, "num_output_files": 1, "total_output_size": 16854249, "num_input_records": 10725, "num_output_records": 10142, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604695871, "job": 10, "event": "table_file_deletion", "file_number": 23} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323604698469, 
"job": 10, "event": "table_file_deletion", "file_number": 21} Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.553494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.698558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.698566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.698569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.698572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:24.698574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:24 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 3d74b371-4d29-468f-84f1-3b2a0094d080 (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:24 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 3d74b371-4d29-468f-84f1-3b2a0094d080 (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:24 localhost ceph-mgr[286188]: [progress INFO root] Completed event 3d74b371-4d29-468f-84f1-3b2a0094d080 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 28 04:53:25 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538511 (monmap changed)... Nov 28 04:53:25 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538511 (monmap changed)... 
Nov 28 04:53:25 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538511 on np0005538511.localdomain Nov 28 04:53:25 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538511 on np0005538511.localdomain Nov 28 04:53:25 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:25 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:25 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:25 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:25 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:25 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", 
"entity": "mon."} : dispatch Nov 28 04:53:25 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:26 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:53:26 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:53:26 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Nov 28 04:53:26 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:26 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:26 localhost ceph-mon[287604]: Reconfiguring mon.np0005538511 (monmap changed)... Nov 28 04:53:26 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538511 on np0005538511.localdomain Nov 28 04:53:26 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:26 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:26 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:26 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:53:26 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:53:27 localhost ceph-mon[287604]: Reconfiguring mon.np0005538512 (monmap changed)... 
Nov 28 04:53:27 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:27 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:53:27 localhost podman[294539]: Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.501055592 +0000 UTC m=+0.076046450 container create 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, CEPH_POINT_RELEASE=) Nov 28 04:53:27 localhost systemd[1]: Started libpod-conmon-3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f.scope. Nov 28 04:53:27 localhost systemd[1]: Started libcrun container. 
Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.469986196 +0000 UTC m=+0.044977114 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.570090556 +0000 UTC m=+0.145081444 container init 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=553, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, GIT_BRANCH=main, io.openshift.tags=rhceph ceph) Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.583359924 +0000 UTC m=+0.158350802 container start 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, release=553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, RELEASE=main, ceph=True, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, version=7, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc.) Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.583667173 +0000 UTC m=+0.158658051 container attach 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vcs-type=git, version=7, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, architecture=x86_64, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
CEPH_POINT_RELEASE=, release=553) Nov 28 04:53:27 localhost openstack_network_exporter[240973]: ERROR 09:53:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:53:27 localhost openstack_network_exporter[240973]: ERROR 09:53:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:53:27 localhost condescending_gates[294555]: 167 167 Nov 28 04:53:27 localhost openstack_network_exporter[240973]: ERROR 09:53:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:53:27 localhost openstack_network_exporter[240973]: Nov 28 04:53:27 localhost openstack_network_exporter[240973]: ERROR 09:53:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:53:27 localhost openstack_network_exporter[240973]: Nov 28 04:53:27 localhost openstack_network_exporter[240973]: ERROR 09:53:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:53:27 localhost systemd[1]: libpod-3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f.scope: Deactivated successfully. 
Nov 28 04:53:27 localhost podman[294539]: 2025-11-28 09:53:27.593562988 +0000 UTC m=+0.168553916 container died 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, architecture=x86_64, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:53:27 localhost podman[294560]: 2025-11-28 09:53:27.688124508 +0000 UTC m=+0.089318599 container remove 3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_gates, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, vcs-type=git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_BRANCH=main, architecture=x86_64) Nov 28 04:53:27 localhost systemd[1]: libpod-conmon-3154c5605efc9d661938a904f14de1f2d5ef44c3f4363417b521ecc15b0a569f.scope: Deactivated successfully. Nov 28 04:53:27 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:53:27 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:53:28 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Nov 28 04:53:28 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:53:28 localhost nova_compute[280168]: 2025-11-28 09:53:28.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:28 localhost systemd[1]: var-lib-containers-storage-overlay-2d96f11932743d2290c86ea0c88a5761137714a02bc99806d2167da30e3e694d-merged.mount: Deactivated successfully. 
Nov 28 04:53:28 localhost podman[294637]: Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.632730679 +0000 UTC m=+0.078635050 container create 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, release=553, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux , architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:53:28 localhost systemd[1]: Started libpod-conmon-5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835.scope. Nov 28 04:53:28 localhost systemd[1]: Started libcrun container. 
Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.603819259 +0000 UTC m=+0.049723710 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.706481698 +0000 UTC m=+0.152386079 container init 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, RELEASE=main, release=553, name=rhceph) Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.721102847 +0000 UTC m=+0.167007218 container start 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=) Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.721334514 +0000 UTC m=+0.167238905 container attach 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55) Nov 28 04:53:28 
localhost gracious_mclaren[294653]: 167 167 Nov 28 04:53:28 localhost systemd[1]: libpod-5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835.scope: Deactivated successfully. Nov 28 04:53:28 localhost podman[294637]: 2025-11-28 09:53:28.726624567 +0000 UTC m=+0.172528978 container died 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, release=553, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64) Nov 28 04:53:28 localhost ceph-mon[287604]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 
172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:53:28 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:28 localhost podman[294658]: 2025-11-28 09:53:28.825239641 +0000 UTC m=+0.089418852 container remove 5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gracious_mclaren, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, GIT_CLEAN=True, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:53:28 localhost systemd[1]: libpod-conmon-5d3fc0e73786ac04e99ae2a2cf52f3cdb1ae69807e577cfe5064444338fc8835.scope: Deactivated successfully. 
Nov 28 04:53:28 localhost podman[239012]: time="2025-11-28T09:53:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:53:28 localhost podman[239012]: @ - - [28/Nov/2025:09:53:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:53:28 localhost podman[239012]: @ - - [28/Nov/2025:09:53:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19182 "" "Go-http-client/1.1" Nov 28 04:53:29 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:53:29 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:53:29 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:29 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:29 localhost nova_compute[280168]: 2025-11-28 09:53:29.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:29 localhost nova_compute[280168]: 2025-11-28 09:53:29.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:29 localhost nova_compute[280168]: 2025-11-28 09:53:29.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:53:29 localhost systemd[1]: var-lib-containers-storage-overlay-4d73b939258e9b68251a3951bc8e1bc9aab1a0e720e35b723a768032b8802c7a-merged.mount: Deactivated successfully. Nov 28 04:53:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.26686 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 28 04:53:29 localhost podman[294733]: Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.729098416 +0000 UTC m=+0.083179821 container create 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True) Nov 28 04:53:29 localhost systemd[1]: Started libpod-conmon-3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92.scope. Nov 28 04:53:29 localhost systemd[1]: Started libcrun container. 
Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.695931363 +0000 UTC m=+0.050012768 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.795683529 +0000 UTC m=+0.149764934 container init 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.33.12, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main) Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.806979074 +0000 UTC m=+0.161060479 container start 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, description=Red Hat Ceph 
Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, RELEASE=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.component=rhceph-container, release=553, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.807272563 +0000 UTC m=+0.161353968 container attach 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, architecture=x86_64, vcs-type=git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, distribution-scope=public, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553) Nov 28 04:53:29 localhost 
goofy_rubin[294748]: 167 167 Nov 28 04:53:29 localhost systemd[1]: libpod-3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92.scope: Deactivated successfully. Nov 28 04:53:29 localhost podman[294733]: 2025-11-28 09:53:29.811794471 +0000 UTC m=+0.165875896 container died 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, maintainer=Guillaume Abrioux , GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55) Nov 28 04:53:29 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:53:29 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:29 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:29 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:29 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:29 localhost ceph-mon[287604]: Reconfiguring mon.np0005538515 
(monmap changed)... Nov 28 04:53:29 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:29 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:29 localhost podman[294753]: 2025-11-28 09:53:29.913926929 +0000 UTC m=+0.085743269 container remove 3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=goofy_rubin, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:29 localhost systemd[1]: libpod-conmon-3aa78de2ffdb57bff6acf9c771439917cfea82968b421f9d6d450effe4efcb92.scope: Deactivated successfully. 
Nov 28 04:53:30 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Nov 28 04:53:30 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev f82a4739-45e5-48d5-a015-d6c0e4fb423d (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:30 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev f82a4739-45e5-48d5-a015-d6c0e4fb423d (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:30 localhost ceph-mgr[286188]: [progress INFO root] Completed event f82a4739-45e5-48d5-a015-d6c0e4fb423d (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 28 04:53:30 localhost nova_compute[280168]: 2025-11-28 09:53:30.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:30 localhost ceph-mon[287604]: mon.np0005538515@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:30 localhost systemd[1]: var-lib-containers-storage-overlay-19ee31ce6adbd2aa96b1c98f5a2ea2693bb00d872a460e3313d25c2af0f56d2b-merged.mount: Deactivated successfully. 
Nov 28 04:53:31 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:31 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:31 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:53:31 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:31 localhost nova_compute[280168]: 2025-11-28 09:53:31.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:32 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:53:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005538511", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 28 04:53:33 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:53:33 localhost 
nova_compute[280168]: 2025-11-28 09:53:33.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.264 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.265 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.287 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.287 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.288 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.288 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.289 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:53:33 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:53:33 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/479351308' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.703 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:53:33 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.26706 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005538511"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:53:33 localhost ceph-mgr[286188]: [cephadm INFO root] Remove daemons mon.np0005538511 Nov 28 04:53:33 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005538511 Nov 28 04:53:33 localhost ceph-mgr[286188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005538511: new quorum should be ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513']) Nov 28 04:53:33 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005538511: new quorum should be ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513']) Nov 28 04:53:33 localhost ceph-mgr[286188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005538511 from monmap... Nov 28 04:53:33 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removing monitor np0005538511 from monmap... 
Nov 28 04:53:33 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005538511 from np0005538511.localdomain -- ports [] Nov 28 04:53:33 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005538511 from np0005538511.localdomain -- ports [] Nov 28 04:53:33 localhost ceph-mgr[286188]: client.34353 ms_handle_reset on v2:172.18.0.107:3300/0 Nov 28 04:53:33 localhost ceph-mon[287604]: mon.np0005538515@2(peon) e10 my rank is now 1 (was 2) Nov 28 04:53:33 localhost ceph-mgr[286188]: client.34382 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:53:33 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:53:33 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:53:33 localhost ceph-mgr[286188]: client.34353 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.908 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.910 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11995MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.910 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:53:33 localhost nova_compute[280168]: 2025-11-28 09:53:33.911 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:53:33 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:53:33 localhost ceph-mon[287604]: paxos.1).electionLogic(38) init, last seen epoch 38 Nov 28 04:53:33 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:53:34 localhost nova_compute[280168]: 2025-11-28 09:53:34.003 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:53:34 localhost nova_compute[280168]: 2025-11-28 09:53:34.003 280172 DEBUG nova.compute.resource_tracker [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:53:34 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:53:34 localhost nova_compute[280168]: 2025-11-28 09:53:34.041 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:53:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:53:34 localhost podman[294821]: 2025-11-28 09:53:34.992184122 +0000 UTC m=+0.091516645 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 04:53:35 localhost podman[294821]: 2025-11-28 09:53:35.031632936 +0000 UTC m=+0.130965499 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, 
io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 04:53:35 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e10 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:53:35 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 
04:53:35 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:35 localhost ceph-mon[287604]: Remove daemons mon.np0005538511 Nov 28 04:53:35 localhost ceph-mon[287604]: Safe to remove mon.np0005538511: new quorum should be ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538514', 'np0005538513']) Nov 28 04:53:35 localhost ceph-mon[287604]: Removing monitor np0005538511 from monmap... Nov 28 04:53:35 localhost ceph-mon[287604]: Removing daemon mon.np0005538511 from np0005538511.localdomain -- ports [] Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:53:35 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538515,np0005538514,np0005538513 in quorum (ranks 0,1,2,3) Nov 28 04:53:35 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:53:35 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:53:36 
localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:36 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: 
Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:53:37 localhost ceph-mon[287604]: Updating np0005538511.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:37 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.26988 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005538511.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:53:37 localhost ceph-mgr[286188]: [cephadm INFO root] Removed label mon from host np0005538511.localdomain Nov 28 04:53:37 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removed label mon from host np0005538511.localdomain Nov 28 04:53:37 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 26e86fa6-16ca-4f04-a584-7b02c5af7818 (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:37 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 26e86fa6-16ca-4f04-a584-7b02c5af7818 (Updating node-proxy deployment (+5 -> 5)) Nov 28 04:53:37 localhost ceph-mgr[286188]: [progress INFO root] Completed event 26e86fa6-16ca-4f04-a584-7b02c5af7818 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds Nov 28 04:53:37 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e10 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:53:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3363422915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:53:37 localhost nova_compute[280168]: 2025-11-28 09:53:37.523 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:53:37 localhost nova_compute[280168]: 2025-11-28 09:53:37.530 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:53:37 localhost nova_compute[280168]: 2025-11-28 09:53:37.548 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:53:37 localhost nova_compute[280168]: 2025-11-28 09:53:37.550 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:53:37 localhost nova_compute[280168]: 2025-11-28 09:53:37.550 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:53:37 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538511 (monmap changed)... Nov 28 04:53:37 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538511 (monmap changed)... Nov 28 04:53:37 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:53:37 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:53:38 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:38 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:53:38 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:38 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:38 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:38 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: Removed label mon from host np0005538511.localdomain Nov 28 04:53:38 localhost 
ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538511.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:38 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:38 localhost nova_compute[280168]: 2025-11-28 09:53:38.547 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:38 localhost nova_compute[280168]: 2025-11-28 09:53:38.548 280172 DEBUG oslo_service.periodic_task [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:38 localhost nova_compute[280168]: 2025-11-28 09:53:38.548 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:53:38 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... Nov 28 04:53:38 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... Nov 28 04:53:38 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:53:38 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:53:39 localhost ceph-mon[287604]: Reconfiguring crash.np0005538511 (monmap changed)... 
Nov 28 04:53:39 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538511 on np0005538511.localdomain Nov 28 04:53:39 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:39 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:39 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:39 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538511.fvuybw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:39 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:53:39 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:53:39 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:39 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:40 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:40 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:40 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538511.fvuybw (monmap changed)... 
Nov 28 04:53:40 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538511.fvuybw on np0005538511.localdomain Nov 28 04:53:40 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:40 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:40 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:40 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:53:40 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:53:40 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:53:40 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:53:41 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538512 (monmap changed)... Nov 28 04:53:41 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:53:41 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:53:41 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:53:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.27004 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005538511.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:53:41 localhost ceph-mgr[286188]: [cephadm INFO root] Removed label mgr from host np0005538511.localdomain Nov 28 04:53:41 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005538511.localdomain Nov 28 04:53:41 localhost ceph-mon[287604]: Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:53:41 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:41 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:42 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:42 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:53:42 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:53:42 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:53:42 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:53:42 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:42 localhost ceph-mon[287604]: Removed label mgr from host np0005538511.localdomain Nov 28 04:53:42 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:42 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:53:42 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:42 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:42 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:53:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44263 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005538511.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:53:42 localhost ceph-mgr[286188]: [cephadm INFO root] Removed label _admin from host np0005538511.localdomain Nov 28 04:53:42 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005538511.localdomain Nov 28 04:53:43 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)... Nov 28 04:53:43 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)... 
Nov 28 04:53:43 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:53:43 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:53:43 localhost ceph-mon[287604]: Removed label _admin from host np0005538511.localdomain Nov 28 04:53:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:43 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:53:43 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:53:43 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:53:44 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:44 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)... Nov 28 04:53:44 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)... Nov 28 04:53:44 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:53:44 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.527906) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624527954, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1079, "num_deletes": 257, "total_data_size": 1714968, "memory_usage": 1736800, "flush_reason": "Manual Compaction"} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624540015, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 995848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16842, "largest_seqno": 17916, "table_properties": {"data_size": 990602, "index_size": 2589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13536, "raw_average_key_size": 21, "raw_value_size": 979270, "raw_average_value_size": 1561, "num_data_blocks": 108, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323604, "oldest_key_time": 1764323604, "file_creation_time": 1764323624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12218 microseconds, and 4806 cpu microseconds. Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.540123) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 995848 bytes OK Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.540148) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.541880) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.541904) EVENT_LOG_v1 {"time_micros": 1764323624541897, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.541926) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1709186, prev total WAL file 
size 1709510, number of live WAL files 2. Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.542595) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353138' seq:72057594037927935, type:22 .. '6C6F676D0033373731' seq:0, type:0; will stop at (end) Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(972KB)], [24(16MB)] Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624542643, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 17850097, "oldest_snapshot_seqno": -1} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10222 keys, 17704597 bytes, temperature: kUnknown Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624700382, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 17704597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17644868, "index_size": 33068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 275203, "raw_average_key_size": 26, "raw_value_size": 17468689, 
"raw_average_value_size": 1708, "num_data_blocks": 1256, "num_entries": 10222, "num_filter_entries": 10222, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323624, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.700719) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 17704597 bytes Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.702978) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 112.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.1 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(35.7) write-amplify(17.8) OK, records in: 10769, records dropped: 547 output_compression: NoCompression Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.703012) EVENT_LOG_v1 {"time_micros": 1764323624702998, "job": 12, "event": "compaction_finished", "compaction_time_micros": 157865, "compaction_time_cpu_micros": 43908, "output_level": 6, "num_output_files": 1, "total_output_size": 17704597, "num_input_records": 10769, "num_output_records": 10222, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624703430, "job": 12, "event": "table_file_deletion", "file_number": 26} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323624705936, 
"job": 12, "event": "table_file_deletion", "file_number": 24} Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.542498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.706030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.706037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.706042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.706045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:53:44.706050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:53:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:53:44 localhost podman[295197]: 2025-11-28 09:53:44.986837599 +0000 UTC m=+0.073967340 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:53:44 localhost podman[295197]: 2025-11-28 09:53:44.991566833 +0000 UTC m=+0.078696584 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:53:45 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:53:45 localhost podman[295189]: 2025-11-28 09:53:45.030932234 +0000 UTC m=+0.128893076 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 28 04:53:45 localhost podman[295190]: 2025-11-28 09:53:45.046567762 +0000 UTC m=+0.139588693 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:53:45 localhost podman[295191]: 2025-11-28 09:53:45.089193143 +0000 UTC m=+0.180099979 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, 
health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 04:53:45 localhost podman[295189]: 2025-11-28 09:53:45.112651509 +0000 UTC m=+0.210612391 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 28 04:53:45 localhost podman[295191]: 2025-11-28 09:53:45.123602334 +0000 UTC m=+0.214509220 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:53:45 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:53:45 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:53:45 localhost podman[295190]: 2025-11-28 09:53:45.175119697 +0000 UTC m=+0.268140598 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 04:53:45 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:53:45 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:45 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:45 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... 
Nov 28 04:53:45 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:53:45 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:53:45 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:53:45 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:53:45 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:53:45 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:53:45 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:46 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:46 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:53:46 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... 
Nov 28 04:53:46 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:53:46 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:46 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:46 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:46 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:47 
localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538513 (monmap changed)... Nov 28 04:53:47 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538513 (monmap changed)... Nov 28 04:53:47 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:53:47 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:53:47 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:53:47 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:53:47 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:47 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:47 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:48 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:53:48 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:53:48 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:53:48 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:53:48 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:53:48 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:53:48 localhost ceph-mon[287604]: Reconfiguring mon.np0005538513 (monmap changed)... Nov 28 04:53:48 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:53:48 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:48 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:48 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:48 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:53:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:53:48 localhost podman[295275]: 2025-11-28 09:53:48.978663912 +0000 UTC m=+0.083250622 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:53:48 localhost podman[295275]: 2025-11-28 09:53:48.988246065 +0000 UTC m=+0.092832825 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:53:49 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:53:49 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Nov 28 04:53:49 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Nov 28 04:53:49 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:53:49 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:53:49 localhost ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:53:49 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:53:49 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:49 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:49 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:53:50 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:50 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... Nov 28 04:53:50 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Nov 28 04:53:50 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:53:50 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:53:50 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:53:50 localhost ceph-mon[287604]: Reconfiguring osd.0 (monmap changed)... 
Nov 28 04:53:50 localhost ceph-mon[287604]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:53:50 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:50 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:50 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:53:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:53:50.836 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:53:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:53:50.836 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:53:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:53:50.836 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:53:51 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... Nov 28 04:53:51 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:53:51 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain
Nov 28 04:53:51 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain
Nov 28 04:53:51 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)...
Nov 28 04:53:51 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain
Nov 28 04:53:51 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:51 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:51 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 28 04:53:51 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 28 04:53:52 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:53:52 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538514.djozup (monmap changed)...
Nov 28 04:53:52 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538514.djozup (monmap changed)...
Nov 28 04:53:52 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain
Nov 28 04:53:52 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain
Nov 28 04:53:52 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)...
Nov 28 04:53:52 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain
Nov 28 04:53:52 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:52 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:52 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538514.djozup (monmap changed)...
Nov 28 04:53:52 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:52 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:53:52 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain
Nov 28 04:53:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 04:53:52 localhost podman[295297]: 2025-11-28 09:53:52.977209932 +0000 UTC m=+0.086146211 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible) Nov 28 04:53:52 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538514 (monmap changed)... 
Nov 28 04:53:53 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538514 (monmap changed)... Nov 28 04:53:53 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:53:53 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:53:53 localhost podman[295297]: 2025-11-28 09:53:53.012812909 +0000 UTC m=+0.121749168 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', 
'/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 04:53:53 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 04:53:53 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538515 (monmap changed)...
Nov 28 04:53:53 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538515 (monmap changed)...
Nov 28 04:53:53 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain
Nov 28 04:53:53 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:53 localhost ceph-mon[287604]: Reconfiguring mon.np0005538514 (monmap changed)...
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:53:53 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:53 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:53:54 localhost podman[295368]:
Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.432245915 +0000 UTC m=+0.075719163 container create bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph,
architecture=x86_64, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , release=553, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:53:54 localhost systemd[1]: Started libpod-conmon-bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977.scope. Nov 28 04:53:54 localhost systemd[1]: Started libcrun container. Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.400847247 +0000 UTC m=+0.044320505 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.506875464 +0000 UTC m=+0.150348712 container init bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, GIT_CLEAN=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph 
Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 28 04:53:54 localhost systemd[1]: tmp-crun.cjrOXk.mount: Deactivated successfully. Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.518743386 +0000 UTC m=+0.162216644 container start bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, maintainer=Guillaume Abrioux , RELEASE=main, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2025-09-24T08:57:55) Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.519057086 +0000 UTC m=+0.162530374 container attach bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, 
distribution-scope=public, architecture=x86_64, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, version=7, description=Red Hat Ceph Storage 7) Nov 28 04:53:54 localhost clever_mclean[295383]: 167 167 Nov 28 04:53:54 localhost systemd[1]: libpod-bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977.scope: Deactivated successfully. Nov 28 04:53:54 localhost podman[295368]: 2025-11-28 09:53:54.523921304 +0000 UTC m=+0.167394582 container died bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git)
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.34406 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005538511.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Nov 28 04:53:54 localhost ceph-mgr[286188]: [cephadm INFO root] Added label _no_schedule to host np0005538511.localdomain
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005538511.localdomain
Nov 28 04:53:54 localhost ceph-mgr[286188]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538511.localdomain
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538511.localdomain
Nov 28 04:53:54 localhost podman[295388]: 2025-11-28 09:53:54.62010281 +0000 UTC m=+0.081555330 container remove bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_mclean, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, maintainer=Guillaume Abrioux , architecture=x86_64, description=Red Hat Ceph
Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, RELEASE=main)
Nov 28 04:53:54 localhost systemd[1]: libpod-conmon-bbc0f5fc8e016680b340a7fb7f767057686d2f53a4bbd11aa24d3013d050d977.scope: Deactivated successfully.
Nov 28 04:53:54 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Nov 28 04:53:54 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005538515.localdomain
Nov 28 04:53:54 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005538515.localdomain
Nov 28 04:53:55 localhost podman[295457]:
Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.319473923 +0000 UTC m=+0.078980722 container create 299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9,
description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True) Nov 28 04:53:55 localhost systemd[1]: Started libpod-conmon-299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be.scope. Nov 28 04:53:55 localhost systemd[1]: Started libcrun container. Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.287895629 +0000 UTC m=+0.047402458 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.389890013 +0000 UTC m=+0.149396822 container init 299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, ceph=True, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.openshift.expose-services=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.399502896 +0000 UTC m=+0.159009695 container start 
299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, RELEASE=main, vcs-type=git, build-date=2025-09-24T08:57:55, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.33.12, release=553, GIT_BRANCH=main, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=) Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.399764155 +0000 UTC m=+0.159271004 container attach 299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=553, vcs-type=git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public, version=7, com.redhat.component=rhceph-container, 
build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, ceph=True, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.) Nov 28 04:53:55 localhost vibrant_margulis[295473]: 167 167 Nov 28 04:53:55 localhost systemd[1]: libpod-299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be.scope: Deactivated successfully. Nov 28 04:53:55 localhost podman[295457]: 2025-11-28 09:53:55.40322575 +0000 UTC m=+0.162732579 container died 299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, build-date=2025-09-24T08:57:55, distribution-scope=public, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:55 localhost systemd[1]: 
var-lib-containers-storage-overlay-845026d1e22dbb3000217c3565c8162f2b9c0611bf0287605636b758bb1c7f64-merged.mount: Deactivated successfully.
Nov 28 04:53:55 localhost systemd[1]: var-lib-containers-storage-overlay-555eec11d45b1d65bc67a897d6d8d0cf128587c4e8c678984188d52936a1ca1e-merged.mount: Deactivated successfully.
Nov 28 04:53:55 localhost podman[295478]: 2025-11-28 09:53:55.505158802 +0000 UTC m=+0.089588756 container remove 299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vibrant_margulis, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, release=553, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, maintainer=Guillaume Abrioux , vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=)
Nov 28 04:53:55 localhost systemd[1]: libpod-conmon-299525f745800ad238445a9b92a87bf9cbf5e6bdb9cebcdbe445a8c0cea443be.scope: Deactivated successfully.
Nov 28 04:53:55 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:53:55 localhost ceph-mon[287604]: Reconfiguring crash.np0005538515 (monmap changed)...
Nov 28 04:53:55 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain
Nov 28 04:53:55 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:55 localhost ceph-mon[287604]: Added label _no_schedule to host np0005538511.localdomain
Nov 28 04:53:55 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:55 localhost ceph-mon[287604]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538511.localdomain
Nov 28 04:53:55 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:55 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:53:55 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 28 04:53:55 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Nov 28 04:53:55 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Nov 28 04:53:55 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005538515.localdomain
Nov 28 04:53:55 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005538515.localdomain
Nov 28 04:53:56 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:53:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44275 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005538511.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 28 04:53:56 localhost podman[295554]:
Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.388890024 +0000 UTC m=+0.092973761 container create 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, RELEASE=main, release=553, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3)
Nov 28 04:53:56 localhost systemd[1]: Started libpod-conmon-82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff.scope. Nov 28 04:53:56 localhost systemd[1]: Started libcrun container. Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.345300072 +0000 UTC m=+0.049383859 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.452564507 +0000 UTC m=+0.156648254 container init 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:56 localhost systemd[1]: tmp-crun.bj4qVR.mount: Deactivated successfully. 
Nov 28 04:53:56 localhost affectionate_heisenberg[295569]: 167 167 Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.468989649 +0000 UTC m=+0.173073386 container start 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, release=553, version=7, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main) Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.472217317 +0000 UTC m=+0.176301054 container attach 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, version=7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.buildah.version=1.33.12, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, 
io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=) Nov 28 04:53:56 localhost systemd[1]: libpod-82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff.scope: Deactivated successfully. Nov 28 04:53:56 localhost podman[295554]: 2025-11-28 09:53:56.476183099 +0000 UTC m=+0.180266846 container died 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, name=rhceph, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:53:56 localhost ceph-mon[287604]: Reconfiguring osd.1 (monmap changed)... Nov 28 04:53:56 localhost ceph-mon[287604]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:53:56 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:56 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:56 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:53:56 localhost podman[295575]: 2025-11-28 09:53:56.580108982 +0000 UTC m=+0.094175597 container remove 82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_heisenberg, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, release=553, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:53:56 localhost systemd[1]: 
libpod-conmon-82c453d50cd775b9a7d1f84ad7f12079d1decccb836e85feed7879b444d5a8ff.scope: Deactivated successfully. Nov 28 04:53:56 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:53:56 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:53:56 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:53:56 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:53:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.27016 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005538511.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:53:57 localhost systemd[1]: var-lib-containers-storage-overlay-52f8975212decd8be6dc0b515937d3e942ee1fb747319e206520e5abfa566202-merged.mount: Deactivated successfully. 
Nov 28 04:53:57 localhost podman[295651]: Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.461127679 +0000 UTC m=+0.081458838 container create 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, RELEASE=main, ceph=True, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:53:57 localhost ceph-mgr[286188]: [cephadm INFO root] Removed host np0005538511.localdomain Nov 28 04:53:57 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removed host np0005538511.localdomain Nov 28 04:53:57 localhost systemd[1]: Started libpod-conmon-6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33.scope. Nov 28 04:53:57 localhost systemd[1]: Started libcrun container. 
Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.428615757 +0000 UTC m=+0.048946966 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.535333595 +0000 UTC m=+0.155664764 container init 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.544137114 +0000 UTC m=+0.164468283 container start 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=553, architecture=x86_64, ceph=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph) Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.544354921 +0000 UTC m=+0.164686080 container attach 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, release=553, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, RELEASE=main) Nov 28 
04:53:57 localhost friendly_galois[295665]: 167 167 Nov 28 04:53:57 localhost systemd[1]: libpod-6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33.scope: Deactivated successfully. Nov 28 04:53:57 localhost podman[295651]: 2025-11-28 09:53:57.547028022 +0000 UTC m=+0.167359211 container died 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, ceph=True, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, architecture=x86_64, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12) Nov 28 04:53:57 localhost openstack_network_exporter[240973]: ERROR 09:53:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:53:57 localhost openstack_network_exporter[240973]: ERROR 09:53:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:53:57 localhost openstack_network_exporter[240973]: ERROR 09:53:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 
04:53:57 localhost openstack_network_exporter[240973]: ERROR 09:53:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:53:57 localhost openstack_network_exporter[240973]: Nov 28 04:53:57 localhost ceph-mon[287604]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:53:57 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain"} : dispatch Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain"} : dispatch Nov 28 04:53:57 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain"}]': finished Nov 28 04:53:57 localhost openstack_network_exporter[240973]: ERROR 09:53:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 
04:53:57 localhost openstack_network_exporter[240973]: Nov 28 04:53:57 localhost podman[295670]: 2025-11-28 09:53:57.659190197 +0000 UTC m=+0.098458247 container remove 6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_galois, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True) Nov 28 04:53:57 localhost systemd[1]: libpod-conmon-6da8e88072a59f66abdd6475679524c010da64c988328a48c6931baa52e0ab33.scope: Deactivated successfully. Nov 28 04:53:57 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... Nov 28 04:53:57 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... 
Nov 28 04:53:57 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:53:57 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:53:58 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:53:58 localhost podman[295739]: Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.417960633 +0000 UTC m=+0.101037516 container create 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, vcs-type=git, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, release=553, ceph=True, io.openshift.expose-services=, version=7, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux ) Nov 28 04:53:58 localhost systemd[1]: var-lib-containers-storage-overlay-e3a1fe58ba7564743a15137bff4bfdb2920f385846a277197347c506c906651c-merged.mount: Deactivated successfully. 
Nov 28 04:53:58 localhost systemd[1]: Started libpod-conmon-80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81.scope. Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.360198109 +0000 UTC m=+0.043274972 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:58 localhost systemd[1]: Started libcrun container. Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.47977851 +0000 UTC m=+0.162855433 container init 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, release=553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.488573098 +0000 UTC m=+0.171649981 container start 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, distribution-scope=public, version=7, com.redhat.component=rhceph-container, ceph=True, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.488784455 +0000 UTC m=+0.171861368 container attach 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:53:58 localhost determined_liskov[295754]: 167 167 Nov 28 04:53:58 localhost systemd[1]: libpod-80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81.scope: Deactivated successfully. Nov 28 04:53:58 localhost podman[295739]: 2025-11-28 09:53:58.49158816 +0000 UTC m=+0.174665043 container died 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, distribution-scope=public, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55) Nov 28 04:53:58 localhost podman[295759]: 2025-11-28 09:53:58.583966881 +0000 UTC m=+0.082168840 container remove 80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_liskov, io.openshift.expose-services=, 
summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, name=rhceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12) Nov 28 04:53:58 localhost systemd[1]: libpod-conmon-80095e06da07e006eeb4b26d1f0c428930efe3584acc7c2469bd7ca8d9451f81.scope: Deactivated successfully. Nov 28 04:53:58 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:53:58 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:53:58 localhost ceph-mon[287604]: Removed host np0005538511.localdomain Nov 28 04:53:58 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:58 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:58 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... 
Nov 28 04:53:58 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:58 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:53:58 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:53:58 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:53:58 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:53:58 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:58 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:58 localhost podman[239012]: time="2025-11-28T09:53:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:53:58 localhost podman[239012]: @ - - [28/Nov/2025:09:53:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:53:58 localhost podman[239012]: @ - - [28/Nov/2025:09:53:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19184 "" "Go-http-client/1.1" Nov 28 04:53:59 localhost podman[295827]: Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.297812305 +0000 UTC m=+0.071459832 container create 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, name=rhceph, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=553, vcs-type=git, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.buildah.version=1.33.12, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:53:59 localhost systemd[1]: Started libpod-conmon-878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f.scope. Nov 28 04:53:59 localhost systemd[1]: Started libcrun container. 
Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.357785456 +0000 UTC m=+0.131433003 container init 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, release=553, architecture=x86_64, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., distribution-scope=public) Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.367546885 +0000 UTC m=+0.141194432 container start 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., 
io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, version=7, description=Red Hat Ceph Storage 7, release=553, vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12) Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.368011379 +0000 UTC m=+0.141658926 container attach 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:53:59 localhost sharp_shaw[295842]: 167 167 Nov 28 04:53:59 localhost systemd[1]: 
libpod-878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f.scope: Deactivated successfully. Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.272035969 +0000 UTC m=+0.045683586 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:53:59 localhost podman[295827]: 2025-11-28 09:53:59.371777383 +0000 UTC m=+0.145424960 container died 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, release=553, vcs-type=git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, io.buildah.version=1.33.12, vendor=Red Hat, Inc., ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:53:59 localhost systemd[1]: var-lib-containers-storage-overlay-cd1f7e3d49597e9f104c7e3900ac85ef7c088cbdae7a111ba7e0174b9b127e95-merged.mount: Deactivated successfully. Nov 28 04:53:59 localhost systemd[1]: var-lib-containers-storage-overlay-290bfb5861a484f53ee709498fbf8ec350d102dd052678ad92fc7c121f1c7740-merged.mount: Deactivated successfully. 
Nov 28 04:53:59 localhost podman[295847]: 2025-11-28 09:53:59.471594801 +0000 UTC m=+0.088005887 container remove 878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sharp_shaw, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, version=7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, release=553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, RELEASE=main, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=) Nov 28 04:53:59 localhost systemd[1]: libpod-conmon-878bcbed3afc22c5199174dda88beb1dbdd0e9e14bb4bed886d2daa7ec33bd3f.scope: Deactivated successfully. 
Nov 28 04:53:59 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 24d3134e-2595-4e0c-aeac-80c1e796ab9d (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:53:59 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 24d3134e-2595-4e0c-aeac-80c1e796ab9d (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:53:59 localhost ceph-mgr[286188]: [progress INFO root] Completed event 24d3134e-2595-4e0c-aeac-80c1e796ab9d (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:59 localhost ceph-mon[287604]: Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:53:59 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:53:59 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:00 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:00 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.626 12 DEBUG 
ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources 
found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:54:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 04:54:02 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:03 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:54:04 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:05 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:05 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:54:05 localhost podman[295879]: 2025-11-28 09:54:05.988822418 +0000 UTC m=+0.091529246 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal) Nov 28 04:54:06 localhost podman[295879]: 2025-11-28 09:54:06.005328601 +0000 UTC m=+0.108035419 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:54:06 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:54:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.27028 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:54:06 localhost ceph-mgr[286188]: [cephadm INFO root] Saving service mon spec with placement label:mon Nov 28 04:54:06 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon Nov 28 04:54:06 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev cd2a798e-179c-47da-9819-8e78128c7f2d (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:54:06 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev cd2a798e-179c-47da-9819-8e78128c7f2d (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:54:06 localhost ceph-mgr[286188]: [progress INFO root] Completed event cd2a798e-179c-47da-9819-8e78128c7f2d (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 28 04:54:07 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon Nov 28 04:54:07 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:07 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:54:07 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.27036 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005538514", "target": ["mon-mgr", ""], "format": "json"}]: dispatch Nov 28 04:54:08 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:09 
localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44295 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005538514"], "force": true, "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO root] Remove daemons mon.np0005538514 Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005538514 Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005538514: new quorum should be ['np0005538512', 'np0005538515', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538513']) Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005538514: new quorum should be ['np0005538512', 'np0005538515', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538513']) Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005538514 from monmap... Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removing monitor np0005538514 from monmap... 
Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005538514 from np0005538514.localdomain -- ports [] Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005538514 from np0005538514.localdomain -- ports [] Nov 28 04:54:09 localhost ceph-mgr[286188]: client.34353 ms_handle_reset on v2:172.18.0.103:3300/0 Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:54:09 localhost ceph-mon[287604]: paxos.1).electionLogic(40) init, last seen epoch 40 Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538512"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538512"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538513"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538513"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538515"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538515"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538512"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538512"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538513"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538513"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538515"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538515"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:54:09 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 
172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:54:09 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: 
log_channel(cephadm) log [INF] : Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:54:10 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:10 localhost ceph-mon[287604]: Remove daemons mon.np0005538514 Nov 28 04:54:10 localhost ceph-mon[287604]: Safe to remove mon.np0005538514: new quorum should be ['np0005538512', 'np0005538515', 'np0005538513'] (from ['np0005538512', 'np0005538515', 'np0005538513']) Nov 28 04:54:10 localhost ceph-mon[287604]: Removing monitor np0005538514 from monmap... 
Nov 28 04:54:10 localhost ceph-mon[287604]: Removing daemon mon.np0005538514 from np0005538514.localdomain -- ports []
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538515,np0005538513 in quorum (ranks 0,1,2)
Nov 28 04:54:10 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:54:10 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf
Nov 28 04:54:10 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf
Nov 28 04:54:10 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf
Nov 28 04:54:10 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf
Nov 28 04:54:10 localhost ceph-mon[287604]: overall HEALTH_OK
Nov 28 04:54:10 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 04:54:10 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 455a3ce3-e7f5-4e5b-a411-77ecee04021a (Updating node-proxy deployment (+4 -> 4))
Nov 28 04:54:10 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 455a3ce3-e7f5-4e5b-a411-77ecee04021a (Updating node-proxy deployment (+4 -> 4))
Nov 28 04:54:10 localhost ceph-mgr[286188]: [progress INFO root] Completed event 455a3ce3-e7f5-4e5b-a411-77ecee04021a (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Nov 28 04:54:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 28 04:54:10 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 28 04:54:11 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)...
Nov 28 04:54:11 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)...
Nov 28 04:54:11 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 28 04:54:11 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:54:11 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 28 04:54:11 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr services"} : dispatch
Nov 28 04:54:11 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:54:11 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:54:11 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain
Nov 28 04:54:11 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain
Nov 28 04:54:11 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:54:11 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:54:11 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:54:11 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:54:11 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:54:12 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:54:12 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0)
Nov 28 04:54:12 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0)
Nov 28 04:54:12 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538512 (monmap changed)...
Nov 28 04:54:12 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538512 (monmap changed)...
Nov 28 04:54:12 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 28 04:54:12 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:12 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:54:12 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:54:12 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain
Nov 28 04:54:12 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain
Nov 28 04:54:12 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)...
Nov 28 04:54:12 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain
Nov 28 04:54:12 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:12 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:12 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:12 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0)
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0)
Nov 28 04:54:13 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538513 (monmap changed)...
Nov 28 04:54:13 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538513 (monmap changed)...
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 28 04:54:13 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:54:13 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:54:13 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain
Nov 28 04:54:13 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 04:54:13 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/921452423' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 04:54:13 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 04:54:13 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/921452423' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 04:54:13 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)...
Nov 28 04:54:13 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain
Nov 28 04:54:13 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:13 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:13 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:13 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:54:14 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:54:14 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:54:14 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 04:54:14 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Nov 28 04:54:14 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Nov 28 04:54:14 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 28 04:54:14 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 28 04:54:14 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:54:14 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:54:14 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005538513.localdomain
Nov 28 04:54:14 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005538513.localdomain
Nov 28 04:54:14 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events
Nov 28 04:54:14 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 04:54:14 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)...
Nov 28 04:54:14 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain
Nov 28 04:54:14 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:14 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:14 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Nov 28 04:54:14 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.570506) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654570608, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1325, "num_deletes": 252, "total_data_size": 2258387, "memory_usage": 2295248, "flush_reason": "Manual Compaction"}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654580581, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1301650, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17921, "largest_seqno": 19241, "table_properties": {"data_size": 1295703, "index_size": 3097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15968, "raw_average_key_size": 22, "raw_value_size": 1282660, "raw_average_value_size": 1798, "num_data_blocks": 132, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323624, "oldest_key_time": 1764323624, "file_creation_time": 1764323654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 10118 microseconds, and 3685 cpu microseconds.
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.580647) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1301650 bytes OK
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.580677) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.582746) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.582770) EVENT_LOG_v1 {"time_micros": 1764323654582764, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.582792) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2251512, prev total WAL file size 2251512, number of live WAL files 2.
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.583710) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1271KB)], [27(16MB)]
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654583755, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19006247, "oldest_snapshot_seqno": -1}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 10399 keys, 15806710 bytes, temperature: kUnknown
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654688042, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15806710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15747263, "index_size": 32338, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 280251, "raw_average_key_size": 26, "raw_value_size": 15569414, "raw_average_value_size": 1497, "num_data_blocks": 1224, "num_entries": 10399, "num_filter_entries": 10399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323654, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.688391) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15806710 bytes
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.690194) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 151.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 16.9 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(26.7) write-amplify(12.1) OK, records in: 10935, records dropped: 536 output_compression: NoCompression
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.690225) EVENT_LOG_v1 {"time_micros": 1764323654690210, "job": 14, "event": "compaction_finished", "compaction_time_micros": 104414, "compaction_time_cpu_micros": 42673, "output_level": 6, "num_output_files": 1, "total_output_size": 15806710, "num_input_records": 10935, "num_output_records": 10399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654690598, "job": 14, "event": "table_file_deletion", "file_number": 29}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323654693234, "job": 14, "event": "table_file_deletion", "file_number": 27}
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.583628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.693340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.693348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.693351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.693353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:14 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:54:14.693356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 04:54:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:54:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 04:54:15 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Nov 28 04:54:15 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Nov 28 04:54:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Nov 28 04:54:15 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 28 04:54:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:54:15 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:54:15 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005538513.localdomain
Nov 28 04:54:15 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005538513.localdomain
Nov 28 04:54:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:54:15 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)...
Nov 28 04:54:15 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain
Nov 28 04:54:15 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:15 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:54:15 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Nov 28 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:54:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:54:15 localhost systemd[1]: tmp-crun.t6jZTz.mount: Deactivated successfully.
Nov 28 04:54:16 localhost podman[296255]: 2025-11-28 09:54:16.001249416 +0000 UTC m=+0.106516113 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Nov 28 04:54:16 localhost podman[296255]: 2025-11-28 09:54:16.03740375 +0000 UTC m=+0.142670437 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 28 04:54:16 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:54:16 localhost podman[296257]: 2025-11-28 09:54:16.050943263 +0000 UTC m=+0.147482123 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 04:54:16 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 04:54:16 localhost podman[296257]: 2025-11-28 09:54:16.062518726 +0000 UTC m=+0.159057646 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 04:54:16 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:54:16 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:54:16 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:54:16 localhost podman[296256]: 2025-11-28 09:54:16.13665603 +0000 UTC m=+0.239808883 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:54:16 localhost ceph-mgr[286188]: [cephadm 
INFO cephadm.serve] Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:54:16 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:54:16 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:54:16 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:16 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:16 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:16 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:54:16 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:54:16 localhost podman[296258]: 2025-11-28 09:54:16.115776012 +0000 UTC m=+0.209092634 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 
'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:54:16 localhost podman[296256]: 2025-11-28 09:54:16.179288731 +0000 UTC m=+0.282441584 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 
04:54:16 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:54:16 localhost podman[296258]: 2025-11-28 09:54:16.198598831 +0000 UTC m=+0.291915413 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:54:16 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:54:16 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... 
Nov 28 04:54:16 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:54:16 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:16 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:16 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:16 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:54:17 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:54:17 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... 
Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:17 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:54:17 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:54:17 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:54:17 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:54:17 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:17 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:17 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:54:17 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:54:17 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:54:17 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:17 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:17 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:17 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:54:17 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:54:18 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:18 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_09:54:18 Nov 28 04:54:18 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 04:54:18 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 04:54:18 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_metadata', 'images', 'vms', '.mgr', 'backups', 'volumes', 'manila_data'] Nov 28 04:54:18 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 04:54:18 localhost 
ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:54:18 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 
2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16) Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 04:54:18 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:54:18 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:18 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:18 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)... Nov 28 04:54:18 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)... Nov 28 04:54:18 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Nov 28 04:54:18 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:54:18 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:18 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:18 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:54:18 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:18 localhost ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:18 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:18 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:54:19 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:19 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:19 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)... 
Nov 28 04:54:19 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Nov 28 04:54:19 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:54:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)... Nov 28 04:54:19 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:19 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:19 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:54:19 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:54:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:54:19 localhost podman[296340]: 2025-11-28 09:54:19.98026432 +0000 UTC m=+0.081146469 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:54:19 localhost podman[296340]: 2025-11-28 09:54:19.99535837 +0000 UTC m=+0.096240499 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:54:20 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:54:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:20 localhost ceph-mon[287604]: Reconfiguring osd.0 (monmap changed)... 
Nov 28 04:54:20 localhost ceph-mon[287604]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:54:20 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:20 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:20 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:54:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:20 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... Nov 28 04:54:20 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:54:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:54:20 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:20 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:20 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:54:20 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:54:21 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)... 
Nov 28 04:54:21 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:54:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:21 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:21 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:21 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:21 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538514.djozup (monmap changed)... Nov 28 04:54:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538514.djozup (monmap changed)... 
Nov 28 04:54:21 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:54:21 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:21 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:54:21 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:54:21 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:21 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:21 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:54:21 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:22 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44312 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005538514.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:54:22 localhost 
ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... Nov 28 04:54:22 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:22 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... 
Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:22 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:22 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:54:22 localhost ceph-mon[287604]: Deploying daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005538515 (monmap changed)... Nov 28 04:54:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:22 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:22 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:22 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:54:22 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:54:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:54:23 localhost systemd[1]: tmp-crun.Nbu9Wm.mount: Deactivated successfully. 
Nov 28 04:54:23 localhost podman[296423]: Nov 28 04:54:23 localhost podman[296415]: 2025-11-28 09:54:23.457207695 +0000 UTC m=+0.100031996 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.474200633 +0000 UTC m=+0.095446425 container create 
f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, name=rhceph, version=7, vendor=Red Hat, Inc., ceph=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph) Nov 28 04:54:23 localhost podman[296415]: 2025-11-28 09:54:23.496511484 +0000 UTC m=+0.139335665 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 04:54:23 localhost systemd[1]: Started libpod-conmon-f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943.scope. Nov 28 04:54:23 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:54:23 localhost systemd[1]: Started libcrun container. 
Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.434678307 +0000 UTC m=+0.055924139 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.544879911 +0000 UTC m=+0.166125683 container init f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, GIT_CLEAN=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.553536256 +0000 UTC m=+0.174782028 container start f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, vcs-type=git, GIT_BRANCH=main, RELEASE=main, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, vendor=Red Hat, Inc., release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64) Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.553745012 +0000 UTC m=+0.174990814 container attach f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , distribution-scope=public, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:54:23 localhost hungry_gagarin[296451]: 167 167 Nov 28 04:54:23 localhost systemd[1]: libpod-f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943.scope: Deactivated successfully. Nov 28 04:54:23 localhost podman[296423]: 2025-11-28 09:54:23.557452315 +0000 UTC m=+0.178698087 container died f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_BRANCH=main, version=7, GIT_CLEAN=True, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:54:23 localhost podman[296456]: 2025-11-28 09:54:23.649341851 +0000 UTC m=+0.077679613 container remove f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_gagarin, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, 
maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:54:23 localhost systemd[1]: libpod-conmon-f12aa6ba1fa8cd1c5125ef7d5bcf0f25712f23b87a3e9b2f4c7150de76e39943.scope: Deactivated successfully. Nov 28 04:54:23 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:23 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)... Nov 28 04:54:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)... 
Nov 28 04:54:23 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Nov 28 04:54:23 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:54:23 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:23 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:23 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:54:23 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:23 localhost ceph-mon[287604]: Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:54:23 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:54:23 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:24 localhost podman[296526]: Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.38431638 +0000 UTC m=+0.081712916 container create 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, release=553, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, 
com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, name=rhceph, version=7, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux ) Nov 28 04:54:24 localhost systemd[1]: Started libpod-conmon-6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6.scope. Nov 28 04:54:24 localhost systemd[1]: var-lib-containers-storage-overlay-22e2b03ed48c04aec873e7d78813026a2a95a8b0b1fba7b042fd31c7dae6d406-merged.mount: Deactivated successfully. Nov 28 04:54:24 localhost systemd[1]: Started libcrun container. 
Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.350267821 +0000 UTC m=+0.047664387 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.464061545 +0000 UTC m=+0.161458091 container init 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, version=7, io.buildah.version=1.33.12, RELEASE=main, ceph=True, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, release=553, vcs-type=git) Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.474542985 +0000 UTC m=+0.171939531 container start 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, ceph=True, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64) Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.474905376 +0000 UTC m=+0.172301902 container attach 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 
04:54:24 localhost affectionate_bartik[296541]: 167 167 Nov 28 04:54:24 localhost systemd[1]: libpod-6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6.scope: Deactivated successfully. Nov 28 04:54:24 localhost podman[296526]: 2025-11-28 09:54:24.479608849 +0000 UTC m=+0.177005385 container died 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Nov 28 04:54:24 localhost systemd[1]: var-lib-containers-storage-overlay-b41c8d286bc4294e55d036e66a43a0b04e5f58aa7bce2e7314b27c4c94baa87e-merged.mount: Deactivated successfully. 
Nov 28 04:54:24 localhost podman[296546]: 2025-11-28 09:54:24.576215039 +0000 UTC m=+0.087808362 container remove 6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_bartik, GIT_BRANCH=main, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, architecture=x86_64, io.buildah.version=1.33.12, name=rhceph, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=) Nov 28 04:54:24 localhost systemd[1]: libpod-conmon-6c7e296a84f0eb4a836cbfa0efeec4b137ab97a70e78ced26645e03f820167f6.scope: Deactivated successfully. Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:24 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)... 
Nov 28 04:54:24 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)... Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Nov 28 04:54:24 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:24 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:24 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:54:24 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:24 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:25 localhost podman[296622]: Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.380728762 +0000 UTC m=+0.077100546 container create d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.buildah.version=1.33.12, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, distribution-scope=public) Nov 28 04:54:25 localhost systemd[1]: Started libpod-conmon-d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b.scope. Nov 28 04:54:25 localhost systemd[1]: Started libcrun container. 
Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.447517901 +0000 UTC m=+0.143889685 container init d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, io.buildah.version=1.33.12, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, version=7, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True) Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.349484668 +0000 UTC m=+0.045856482 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.459020532 +0000 UTC m=+0.155392346 container start d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, 
release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, architecture=x86_64, name=rhceph, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.buildah.version=1.33.12, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:54:25 localhost musing_dijkstra[296637]: 167 167 Nov 28 04:54:25 localhost systemd[1]: libpod-d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b.scope: Deactivated successfully. Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.459418474 +0000 UTC m=+0.155790298 container attach d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, 
build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, io.openshift.expose-services=, ceph=True, RELEASE=main, version=7, vcs-type=git) Nov 28 04:54:25 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:25 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:25 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:25 localhost podman[296622]: 2025-11-28 09:54:25.468671616 +0000 UTC m=+0.165043440 container died d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, release=553, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured 
and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , vcs-type=git, com.redhat.component=rhceph-container) Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:25 localhost systemd[1]: var-lib-containers-storage-overlay-7f64d01fc17ec90b253bb7d538d09ea35e21e9c87aa8704788c622ea90f303ce-merged.mount: Deactivated successfully. Nov 28 04:54:25 localhost podman[296642]: 2025-11-28 09:54:25.568914727 +0000 UTC m=+0.088106471 container remove d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_dijkstra, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:54:25 localhost systemd[1]: 
libpod-conmon-d7779ce290bade0419fb5a1edcec7662eebb0fdca25be09ccb42d525f5d3a37b.scope: Deactivated successfully. Nov 28 04:54:25 localhost ceph-mon[287604]: Reconfiguring osd.1 (monmap changed)... Nov 28 04:54:25 localhost ceph-mon[287604]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:54:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:25 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:54:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:25 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:25 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:54:25 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... 
Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:54:25 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:25 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:25 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:54:25 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:54:26 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:26 localhost podman[296719]: Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 09:54:26.416567257 +0000 UTC m=+0.075599389 container create 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main) Nov 28 04:54:26 localhost systemd[1]: Started libpod-conmon-8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a.scope. Nov 28 04:54:26 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:26 localhost systemd[1]: Started libcrun container. 
Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:26 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:26 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 09:54:26.480599171 +0000 UTC m=+0.139631313 container init 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 09:54:26.385425395 +0000 UTC m=+0.044457557 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 
09:54:26.490632958 +0000 UTC m=+0.149665100 container start 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, release=553, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , vcs-type=git, GIT_BRANCH=main) Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 09:54:26.490823684 +0000 UTC m=+0.149855826 container attach 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , ceph=True, GIT_CLEAN=True, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:54:26 localhost pedantic_austin[296734]: 167 167 Nov 28 04:54:26 localhost systemd[1]: libpod-8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a.scope: Deactivated successfully. Nov 28 04:54:26 localhost podman[296719]: 2025-11-28 09:54:26.492553897 +0000 UTC m=+0.151586059 container died 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, GIT_CLEAN=True, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, release=553, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, 
GIT_BRANCH=main) Nov 28 04:54:26 localhost ceph-mon[287604]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:54:26 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:54:26 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:26 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:26 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:26 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:54:26 localhost podman[296739]: 2025-11-28 09:54:26.59093141 +0000 UTC m=+0.090180164 container remove 8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_austin, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully 
featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph) Nov 28 04:54:26 localhost systemd[1]: libpod-conmon-8683940fd5b947aa3511d1c5d65e9e61773580381c0ec7c84b3ca6e5e9e4115a.scope: Deactivated successfully. Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:26 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... Nov 28 04:54:26 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:54:26 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:54:26 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:54:26 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config 
generate-minimal-conf"} v 0) Nov 28 04:54:26 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:26 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:54:26 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:54:27 localhost podman[296808]: Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.288872799 +0000 UTC m=+0.077091125 container create 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, vcs-type=git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:54:27 localhost systemd[1]: Started libpod-conmon-59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0.scope. 
Nov 28 04:54:27 localhost systemd[1]: Started libcrun container. Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.349372725 +0000 UTC m=+0.137591051 container init 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=553, name=rhceph, io.buildah.version=1.33.12, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph) Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.257890453 +0000 UTC m=+0.046108829 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.358728712 +0000 UTC m=+0.146947028 container start 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, 
io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, release=553, version=7, distribution-scope=public, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=) Nov 28 04:54:27 localhost nervous_germain[296823]: 167 167 Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.358969109 +0000 UTC m=+0.147187425 container attach 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=553, RELEASE=main, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, 
GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:54:27 localhost systemd[1]: libpod-59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0.scope: Deactivated successfully. Nov 28 04:54:27 localhost podman[296808]: 2025-11-28 09:54:27.360236477 +0000 UTC m=+0.148454863 container died 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , release=553, version=7) Nov 28 04:54:27 localhost systemd[1]: var-lib-containers-storage-overlay-556a39532d0aa6de0e8a898bd2980e40c73158ebf1ec18de63bc439213223fc9-merged.mount: Deactivated successfully. 
Nov 28 04:54:27 localhost podman[296828]: 2025-11-28 09:54:27.450654989 +0000 UTC m=+0.081898213 container remove 59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_germain, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, version=7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, vcs-type=git, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container) Nov 28 04:54:27 localhost systemd[1]: libpod-conmon-59b563431c1c8a424ad9d3cdabcd3143e86abd92cef35eb883af1e0a3a1d7da0.scope: Deactivated successfully. 
Nov 28 04:54:27 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:27 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:27 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:27 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:27 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:27 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:27 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:27 localhost openstack_network_exporter[240973]: ERROR 09:54:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:54:27 localhost openstack_network_exporter[240973]: ERROR 09:54:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:54:27 localhost openstack_network_exporter[240973]: ERROR 09:54:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:54:27 localhost openstack_network_exporter[240973]: ERROR 09:54:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:54:27 localhost openstack_network_exporter[240973]: Nov 28 04:54:27 localhost 
openstack_network_exporter[240973]: ERROR 09:54:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:54:27 localhost openstack_network_exporter[240973]: Nov 28 04:54:27 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:54:27 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:27 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:28 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:28 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:28 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:28 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", 
"id": "np0005538514"} : dispatch Nov 28 04:54:28 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:28 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... Nov 28 04:54:28 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:54:28 localhost podman[239012]: time="2025-11-28T09:54:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:54:28 localhost podman[239012]: @ - - [28/Nov/2025:09:54:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:54:28 localhost podman[239012]: @ - - [28/Nov/2025:09:54:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19176 "" "Go-http-client/1.1" Nov 28 04:54:29 localhost nova_compute[280168]: 2025-11-28 09:54:29.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:29 localhost nova_compute[280168]: 2025-11-28 09:54:29.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:54:29 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:29 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:29 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:29 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:29 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:29 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:29 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:29 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:30 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:54:30 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:54:30 localhost 
ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:54:30 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:54:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:54:30 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 39e4d999-07d1-4255-93ab-c5fe5c86b525 (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:54:30 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 39e4d999-07d1-4255-93ab-c5fe5c86b525 (Updating node-proxy deployment (+4 -> 4)) Nov 28 04:54:30 localhost ceph-mgr[286188]: [progress INFO root] Completed event 39e4d999-07d1-4255-93ab-c5fe5c86b525 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds Nov 28 04:54:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:54:30 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 04:54:30 localhost nova_compute[280168]: 2025-11-28 09:54:30.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:30 localhost nova_compute[280168]: 2025-11-28 09:54:30.241 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:30 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:30 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:30 localhost ceph-mon[287604]: from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:54:30 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:30 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:30 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:30 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:31 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:31 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:31 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:31 localhost ceph-mgr[286188]: mgr finish 
mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:31 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:32 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:32 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:32 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:32 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:32 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:33 localhost nova_compute[280168]: 2025-11-28 09:54:33.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:33 localhost nova_compute[280168]: 2025-11-28 09:54:33.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:54:33 localhost nova_compute[280168]: 2025-11-28 09:54:33.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:54:33 localhost nova_compute[280168]: 2025-11-28 09:54:33.365 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:54:33 localhost nova_compute[280168]: 2025-11-28 09:54:33.366 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:33 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:33 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:33 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:33 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:33 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:34 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.272 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.273 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.273 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.273 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.274 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:54:34 localhost ceph-mgr[286188]: [progress INFO 
root] Writing back 50 completed events Nov 28 04:54:34 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 04:54:34 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:34 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:34 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:34 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:34 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:34 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:54:34 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2905538660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.704 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.909 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.911 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11961MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.912 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:54:34 localhost nova_compute[280168]: 2025-11-28 09:54:34.912 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.010 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.011 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.026 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:54:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:54:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4208687966' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:54:35 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:35 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.483 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:54:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.491 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for 
provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:54:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.511 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.514 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:54:35 localhost nova_compute[280168]: 2025-11-28 09:54:35.514 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:54:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:36 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:36 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:54:36 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/738597505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:54:36 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:36 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:36 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:36 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:54:36 localhost systemd[1]: tmp-crun.tIhUv7.mount: Deactivated successfully. Nov 28 04:54:36 localhost podman[296974]: 2025-11-28 09:54:36.992703866 +0000 UTC m=+0.099909822 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git) Nov 28 04:54:37 localhost podman[296974]: 2025-11-28 09:54:37.008405804 +0000 UTC m=+0.115611770 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6) Nov 28 04:54:37 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:54:37 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:37 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (2) No such file or directory Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e11 adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints Nov 28 04:54:37 localhost nova_compute[280168]: 2025-11-28 09:54:37.510 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:37 localhost nova_compute[280168]: 2025-11-28 09:54:37.510 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538512"} v 0) Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538512"} : dispatch Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": 
"np0005538513"} v 0) Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538513"} : dispatch Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(probing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538515"} v 0) Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538515"} : dispatch Nov 28 04:54:37 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:37 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:54:37 localhost ceph-mon[287604]: paxos.1).electionLogic(42) init, last seen epoch 42 Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:37 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:38 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:38 localhost nova_compute[280168]: 2025-11-28 09:54:38.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] 
Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:54:38 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:38 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:38 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:38 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:39 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:39 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:39 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:39 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:40 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:40 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:40 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) 
Nov 28 04:54:40 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:40 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:40 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_auth_request failed to assign global_id Nov 28 04:54:40 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_auth_request failed to assign global_id Nov 28 04:54:41 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_auth_request failed to assign global_id Nov 28 04:54:41 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:41 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:41 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:41 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:42 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_auth_request failed to assign global_id Nov 28 04:54:42 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 handle_command mon_command({"prefix": "mon metadata", "id": 
"np0005538514"} v 0) Nov 28 04:54:42 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:42 localhost ceph-mgr[286188]: mgr finish mon failed to return metadata for mon.np0005538514: (22) Invalid argument Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(electing) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538512 calling monitor election Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538512 is new leader, mons np0005538512,np0005538515,np0005538513,np0005538514 in quorum (ranks 0,1,2,3) Nov 28 04:54:42 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:54:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.44342 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch Nov 28 04:54:42 localhost ceph-mgr[286188]: [cephadm INFO root] Reconfig service osd.default_drive_group Nov 28 04:54:42 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:54:42 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 
handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:54:43 localhost ceph-mgr[286188]: mgr.server 
handle_open ignoring open from mon.np0005538514 172.18.0.107:0/2340047348; not ready for session (expect reconnect) Nov 28 04:54:43 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:54:43 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.17154 172.18.0.108:0/2342198278' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:54:43 localhost ceph-mon[287604]: Reconfig service osd.default_drive_group Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:43 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:44 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:54:44 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e12 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:54:44 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 e89: 6 total, 6 up, 6 in Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr handle_mgr_map I was active but no longer am Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn e: '/usr/bin/ceph-mgr' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 0: '/usr/bin/ceph-mgr' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 1: '-n' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 2: 'mgr.np0005538515.yfkzhl' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 3: '-f' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 4: '--setuser' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 5: 'ceph' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 6: '--setgroup' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 7: 'ceph' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 8: '--default-log-to-file=false' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 9: '--default-log-to-journald=true' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn 10: '--default-log-to-stderr=false' Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn respawning with exe /usr/bin/ceph-mgr Nov 28 04:54:44 localhost ceph-mgr[286188]: mgr respawn exe_path /proc/self/exe Nov 28 04:54:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:44.916+0000 7f328e5e9640 -1 mgr handle_mgr_map I was active but no longer am Nov 28 04:54:44 localhost systemd[1]: session-66.scope: Deactivated successfully. Nov 28 04:54:44 localhost systemd[1]: session-66.scope: Consumed 21.108s CPU time. Nov 28 04:54:44 localhost systemd-logind[763]: Session 66 logged out. Waiting for processes to exit. Nov 28 04:54:44 localhost systemd-logind[763]: Removed session 66. 
Nov 28 04:54:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: ignoring --setuser ceph since I am not root Nov 28 04:54:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: ignoring --setgroup ceph since I am not root Nov 28 04:54:45 localhost ceph-mgr[286188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2 Nov 28 04:54:45 localhost ceph-mgr[286188]: pidfile_write: ignore empty --pid-file Nov 28 04:54:45 localhost ceph-mon[287604]: from='client.? 172.18.0.200:0/1122335753' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 28 04:54:45 localhost ceph-mon[287604]: Activating manager daemon np0005538511.fvuybw Nov 28 04:54:45 localhost ceph-mon[287604]: from='client.? 172.18.0.200:0/1122335753' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 28 04:54:45 localhost ceph-mon[287604]: from='mgr.17154 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Loading python module 'alerts' Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Loading python module 'balancer' Nov 28 04:54:45 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:45.108+0000 7fcd196e9140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Loading python module 'cephadm' Nov 28 04:54:45 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:45.178+0000 7fcd196e9140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member Nov 28 04:54:45 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Loading python module 'crash' Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Module crash has missing NOTIFY_TYPES member Nov 28 04:54:45 localhost ceph-mgr[286188]: mgr[py] Loading python module 'dashboard' Nov 28 04:54:45 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:45.848+0000 7fcd196e9140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'devicehealth' Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'diskprediction_local' Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:46.430+0000 7fcd196e9140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: from numpy import show_config as show_numpy_config Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:46.559+0000 7fcd196e9140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'influx' Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Module influx has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'insights' Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:46.617+0000 7fcd196e9140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'iostat' Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost ceph-mgr[286188]: mgr[py] Loading python module 'k8sevents' Nov 28 04:54:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:46.733+0000 7fcd196e9140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member Nov 28 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:54:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:54:47 localhost systemd[1]: tmp-crun.USb96H.mount: Deactivated successfully. Nov 28 04:54:47 localhost podman[297024]: 2025-11-28 09:54:47.015407077 +0000 UTC m=+0.076464596 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'localpool' Nov 28 04:54:47 localhost podman[297024]: 2025-11-28 09:54:47.052357395 +0000 UTC m=+0.113414904 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:54:47 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'mds_autoscaler' Nov 28 04:54:47 localhost podman[297023]: 2025-11-28 09:54:47.064221817 +0000 UTC m=+0.125519763 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 04:54:47 localhost podman[297025]: 
2025-11-28 09:54:47.117453332 +0000 UTC m=+0.211109336 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent) Nov 28 04:54:47 localhost podman[297027]: 2025-11-28 09:54:47.174775102 +0000 UTC m=+0.225878867 container health_status 
d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:54:47 localhost podman[297027]: 2025-11-28 09:54:47.185877251 +0000 UTC m=+0.236981056 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) 
Nov 28 04:54:47 localhost podman[297023]: 2025-11-28 09:54:47.197884988 +0000 UTC m=+0.259182954 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:54:47 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'mirroring' Nov 28 04:54:47 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:54:47 localhost podman[297025]: 2025-11-28 09:54:47.253243918 +0000 UTC m=+0.346899952 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:54:47 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'nfs' Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'orchestrator' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.458+0000 7fcd196e9140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'osd_perf_query' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.603+0000 7fcd196e9140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'osd_support' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.667+0000 7fcd196e9140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'pg_autoscaler' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.723+0000 7fcd196e9140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member Nov 28 
04:54:47 localhost ceph-mgr[286188]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'progress' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.791+0000 7fcd196e9140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Module progress has missing NOTIFY_TYPES member Nov 28 04:54:47 localhost ceph-mgr[286188]: mgr[py] Loading python module 'prometheus' Nov 28 04:54:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:47.848+0000 7fcd196e9140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rbd_support' Nov 28 04:54:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:48.145+0000 7fcd196e9140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'restful' Nov 28 04:54:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:48.225+0000 7fcd196e9140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rgw' Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Module rgw has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'rook' Nov 28 04:54:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:48.554+0000 7fcd196e9140 -1 mgr[py] Module rgw has missing 
NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Module rook has missing NOTIFY_TYPES member Nov 28 04:54:48 localhost ceph-mgr[286188]: mgr[py] Loading python module 'selftest' Nov 28 04:54:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:48.972+0000 7fcd196e9140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'snap_schedule' Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.032+0000 7fcd196e9140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'stats' Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'status' Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module status has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'telegraf' Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.220+0000 7fcd196e9140 -1 mgr[py] Module status has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'telemetry' Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.279+0000 7fcd196e9140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'test_orchestrator' Nov 28 04:54:49 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.410+0000 7fcd196e9140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'volumes' Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.552+0000 7fcd196e9140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Loading python module 'zabbix' Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.738+0000 7fcd196e9140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:54:49.794+0000 7fcd196e9140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member Nov 28 04:54:49 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5575ffdcf600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Nov 28 04:54:50 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:54:50.837 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:54:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:54:50.837 
158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:54:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:54:50.837 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:54:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:54:50 localhost podman[297106]: 2025-11-28 09:54:50.976822103 +0000 UTC m=+0.079072386 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:54:50 localhost podman[297106]: 2025-11-28 09:54:50.984977822 +0000 UTC m=+0.087228095 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:54:50 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:54:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:54:53 localhost podman[297130]: 2025-11-28 09:54:53.971762151 +0000 UTC m=+0.079154028 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible) Nov 28 
04:54:53 localhost podman[297130]: 2025-11-28 09:54:53.986430009 +0000 UTC m=+0.093821866 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:54:53 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:54:55 localhost systemd[1]: Stopping User Manager for UID 1002... Nov 28 04:54:55 localhost systemd[26783]: Activating special unit Exit the Session... Nov 28 04:54:55 localhost systemd[26783]: Removed slice User Background Tasks Slice. Nov 28 04:54:55 localhost systemd[26783]: Stopped target Main User Target. Nov 28 04:54:55 localhost systemd[26783]: Stopped target Basic System. Nov 28 04:54:55 localhost systemd[26783]: Stopped target Paths. Nov 28 04:54:55 localhost systemd[26783]: Stopped target Sockets. Nov 28 04:54:55 localhost systemd[26783]: Stopped target Timers. Nov 28 04:54:55 localhost systemd[26783]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 28 04:54:55 localhost systemd[26783]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 04:54:55 localhost systemd[26783]: Closed D-Bus User Message Bus Socket. Nov 28 04:54:55 localhost systemd[26783]: Stopped Create User's Volatile Files and Directories. Nov 28 04:54:55 localhost systemd[26783]: Removed slice User Application Slice. Nov 28 04:54:55 localhost systemd[26783]: Reached target Shutdown. Nov 28 04:54:55 localhost systemd[26783]: Finished Exit the Session. Nov 28 04:54:55 localhost systemd[26783]: Reached target Exit the Session. Nov 28 04:54:55 localhost systemd[1]: user@1002.service: Deactivated successfully. Nov 28 04:54:55 localhost systemd[1]: Stopped User Manager for UID 1002. Nov 28 04:54:55 localhost systemd[1]: user@1002.service: Consumed 12.875s CPU time, read 0B from disk, written 7.0K to disk. Nov 28 04:54:55 localhost systemd[1]: Stopping User Runtime Directory /run/user/1002... Nov 28 04:54:55 localhost systemd[1]: run-user-1002.mount: Deactivated successfully. Nov 28 04:54:55 localhost systemd[1]: user-runtime-dir@1002.service: Deactivated successfully. Nov 28 04:54:55 localhost systemd[1]: Stopped User Runtime Directory /run/user/1002. Nov 28 04:54:55 localhost systemd[1]: Removed slice User Slice of UID 1002. 
Nov 28 04:54:55 localhost systemd[1]: user-1002.slice: Consumed 4min 35.182s CPU time. Nov 28 04:54:55 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:54:57 localhost openstack_network_exporter[240973]: ERROR 09:54:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:54:57 localhost openstack_network_exporter[240973]: ERROR 09:54:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:54:57 localhost openstack_network_exporter[240973]: ERROR 09:54:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:54:57 localhost openstack_network_exporter[240973]: ERROR 09:54:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:54:57 localhost openstack_network_exporter[240973]: Nov 28 04:54:57 localhost openstack_network_exporter[240973]: ERROR 09:54:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:54:57 localhost openstack_network_exporter[240973]: Nov 28 04:54:58 localhost podman[239012]: time="2025-11-28T09:54:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:54:58 localhost podman[239012]: @ - - [28/Nov/2025:09:54:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:54:58 localhost podman[239012]: @ - - [28/Nov/2025:09:54:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19180 "" "Go-http-client/1.1" Nov 28 04:55:00 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 
28 04:55:05 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:55:07 localhost podman[297151]: 2025-11-28 09:55:07.969653179 +0000 UTC m=+0.076292731 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal) Nov 28 04:55:08 localhost podman[297151]: 2025-11-28 09:55:08.011690322 +0000 UTC m=+0.118329874 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, 
container_name=openstack_network_exporter) Nov 28 04:55:08 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:55:10 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:15 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:55:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:55:17 localhost podman[297171]: 2025-11-28 09:55:17.986319885 +0000 UTC m=+0.091954478 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 04:55:17 localhost podman[297171]: 2025-11-28 09:55:17.995244078 +0000 UTC m=+0.100878701 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 04:55:18 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:55:18 localhost podman[297173]: 2025-11-28 09:55:18.047405561 +0000 UTC m=+0.143703949 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent) Nov 28 04:55:18 localhost podman[297173]: 2025-11-28 09:55:18.052844897 +0000 UTC 
m=+0.149143245 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:55:18 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:55:18 localhost podman[297178]: 2025-11-28 09:55:18.103229725 +0000 UTC m=+0.196755758 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:55:18 localhost podman[297178]: 2025-11-28 09:55:18.11091873 +0000 UTC m=+0.204444813 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:55:18 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:55:18 localhost podman[297172]: 2025-11-28 09:55:18.158324537 +0000 UTC m=+0.254787900 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 04:55:18 localhost podman[297172]: 2025-11-28 09:55:18.254176253 +0000 UTC m=+0.350639606 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Nov 28 04:55:18 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:55:19 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e90 e90: 6 total, 6 up, 6 in Nov 28 04:55:19 localhost ceph-mon[287604]: Activating manager daemon np0005538513.dsfdlx Nov 28 04:55:19 localhost ceph-mon[287604]: Manager daemon np0005538511.fvuybw is unresponsive, replacing it with standby daemon np0005538513.dsfdlx Nov 28 04:55:20 localhost sshd[297251]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:55:20 localhost systemd-logind[763]: New session 67 of user ceph-admin. Nov 28 04:55:20 localhost systemd[1]: Created slice User Slice of UID 1002. 
Nov 28 04:55:20 localhost systemd[1]: Starting User Runtime Directory /run/user/1002... Nov 28 04:55:20 localhost systemd[1]: Finished User Runtime Directory /run/user/1002. Nov 28 04:55:20 localhost systemd[1]: Starting User Manager for UID 1002... Nov 28 04:55:20 localhost systemd[297255]: Queued start job for default target Main User Target. Nov 28 04:55:20 localhost systemd[297255]: Created slice User Application Slice. Nov 28 04:55:20 localhost systemd[297255]: Started Mark boot as successful after the user session has run 2 minutes. Nov 28 04:55:20 localhost systemd[297255]: Started Daily Cleanup of User's Temporary Directories. Nov 28 04:55:20 localhost systemd[297255]: Reached target Paths. Nov 28 04:55:20 localhost systemd[297255]: Reached target Timers. Nov 28 04:55:20 localhost systemd[297255]: Starting D-Bus User Message Bus Socket... Nov 28 04:55:20 localhost systemd[297255]: Starting Create User's Volatile Files and Directories... Nov 28 04:55:20 localhost systemd[297255]: Finished Create User's Volatile Files and Directories. Nov 28 04:55:20 localhost systemd[297255]: Listening on D-Bus User Message Bus Socket. Nov 28 04:55:20 localhost systemd[297255]: Reached target Sockets. Nov 28 04:55:20 localhost systemd[297255]: Reached target Basic System. Nov 28 04:55:20 localhost systemd[297255]: Reached target Main User Target. Nov 28 04:55:20 localhost systemd[297255]: Startup finished in 170ms. Nov 28 04:55:20 localhost systemd[1]: Started User Manager for UID 1002. Nov 28 04:55:20 localhost systemd[1]: Started Session 67 of User ceph-admin. 
Nov 28 04:55:20 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:20 localhost ceph-mon[287604]: Manager daemon np0005538513.dsfdlx is now available Nov 28 04:55:20 localhost ceph-mon[287604]: removing stray HostCache host record np0005538511.localdomain.devices.0 Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain.devices.0"} : dispatch Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain.devices.0"}]': finished Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain.devices.0"} : dispatch Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538511.localdomain.devices.0"}]': finished Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538513.dsfdlx/mirror_snapshot_schedule"} : dispatch Nov 28 04:55:20 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538513.dsfdlx/trash_purge_schedule"} : dispatch Nov 28 04:55:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:55:21 localhost podman[297352]: 2025-11-28 09:55:21.328128684 +0000 UTC m=+0.083272794 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:55:21 localhost podman[297352]: 2025-11-28 09:55:21.338386197 +0000 UTC m=+0.093530257 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:55:21 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:55:21 localhost podman[297401]: 2025-11-28 09:55:21.609190875 +0000 UTC m=+0.096401394 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux ) Nov 28 04:55:21 localhost podman[297401]: 2025-11-28 09:55:21.729542939 +0000 UTC m=+0.216753458 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, CEPH_POINT_RELEASE=, 
io.buildah.version=1.33.12, RELEASE=main, name=rhceph, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64) Nov 28 04:55:21 localhost ceph-mon[287604]: [28/Nov/2025:09:55:21] ENGINE Bus STARTING Nov 28 04:55:21 localhost ceph-mon[287604]: [28/Nov/2025:09:55:21] ENGINE Serving on http://172.18.0.106:8765 Nov 28 04:55:22 localhost ceph-mon[287604]: [28/Nov/2025:09:55:21] ENGINE Serving on https://172.18.0.106:7150 Nov 28 04:55:22 localhost ceph-mon[287604]: [28/Nov/2025:09:55:21] ENGINE Bus STARTED Nov 28 04:55:22 localhost ceph-mon[287604]: [28/Nov/2025:09:55:21] ENGINE Client ('172.18.0.106', 60952) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 
04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:22 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:55:24 localhost podman[297678]: 2025-11-28 09:55:24.135021161 +0000 UTC m=+0.098521089 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:55:24 localhost podman[297678]: 2025-11-28 09:55:24.17233183 +0000 UTC m=+0.135831788 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0) Nov 28 04:55:24 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd/host:np0005538512", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:55:24 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: Adjusting osd_memory_target on 
np0005538514.localdomain to 836.6M Nov 28 04:55:24 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:55:24 localhost ceph-mon[287604]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:55:24 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:24 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:24 
localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:24 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:55:25 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:55:25 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:55:26 localhost ceph-mon[287604]: Updating np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:55:26 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:55:26 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:55:26 
localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:55:26 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:26 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost openstack_network_exporter[240973]: ERROR 09:55:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:55:27 localhost openstack_network_exporter[240973]: ERROR 09:55:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:55:27 localhost openstack_network_exporter[240973]: ERROR 09:55:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:55:27 localhost openstack_network_exporter[240973]: ERROR 09:55:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:55:27 localhost openstack_network_exporter[240973]: Nov 28 04:55:27 localhost openstack_network_exporter[240973]: ERROR 09:55:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:55:27 localhost openstack_network_exporter[240973]: Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost 
ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:27 localhost ceph-mon[287604]: Reconfiguring mon.np0005538512 (monmap changed)... Nov 28 04:55:27 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538512 on np0005538512.localdomain Nov 28 04:55:27 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:55:28 localhost nova_compute[280168]: 2025-11-28 09:55:28.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:28 localhost nova_compute[280168]: 2025-11-28 09:55:28.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 04:55:28 localhost nova_compute[280168]: 2025-11-28 09:55:28.378 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 04:55:28 localhost podman[239012]: time="2025-11-28T09:55:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:55:28 localhost podman[239012]: @ - - [28/Nov/2025:09:55:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 
28 04:55:28 localhost podman[239012]: @ - - [28/Nov/2025:09:55:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19184 "" "Go-http-client/1.1" Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:29 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538512.zyhkxs (monmap changed)... Nov 28 04:55:29 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538512.zyhkxs on np0005538512.localdomain Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538512.zyhkxs", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:29 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:30 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:55:30 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:55:30 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:30 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:30 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:30 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:30 localhost nova_compute[280168]: 2025-11-28 09:55:30.378 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:30 localhost nova_compute[280168]: 2025-11-28 09:55:30.378 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:30 localhost ceph-mon[287604]: mon.np0005538515@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:31 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... 
Nov 28 04:55:31 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:55:31 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:31 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:31 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:55:31 localhost nova_compute[280168]: 2025-11-28 09:55:31.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:31 localhost nova_compute[280168]: 2025-11-28 09:55:31.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:55:32 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... 
Nov 28 04:55:32 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:55:32 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:32 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:32 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:32 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:32 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:55:33 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... Nov 28 04:55:33 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:55:33 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:33 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:33 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:33 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:33 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:33 localhost nova_compute[280168]: 2025-11-28 09:55:33.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:33 localhost nova_compute[280168]: 2025-11-28 09:55:33.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:55:33 localhost nova_compute[280168]: 2025-11-28 09:55:33.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:55:33 localhost nova_compute[280168]: 2025-11-28 09:55:33.335 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:55:34 localhost nova_compute[280168]: 2025-11-28 09:55:34.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:34 localhost nova_compute[280168]: 2025-11-28 09:55:34.281 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:34 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:55:34 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:55:34 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:34 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:34 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:35 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5575ffdcef20 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0 Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@1(peon) e13 my rank is now 0 (was 1) Nov 28 04:55:35 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:55:35 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0 Nov 28 04:55:35 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x55760951e000 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election Nov 28 04:55:35 localhost ceph-mon[287604]: paxos.0).electionLogic(46) init, last seen epoch 46 Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(electing) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538513"} v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mon metadata", "id": "np0005538513"} : dispatch Nov 28 04:55:35 
localhost ceph-mon[287604]: mon.np0005538515@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(electing) e13 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538515"} v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mon metadata", "id": "np0005538515"} : dispatch Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : mon.np0005538515 is new leader, mons np0005538515,np0005538513,np0005538514 in quorum (ranks 0,1,2) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : monmap epoch 13 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : fsid 2c5417c9-00eb-57d5-a565-ddecbc7995c1 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : last_changed 2025-11-28T09:55:34.993934+0000 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : created 2025-11-28T07:45:36.120469+0000 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : election_strategy: 1 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005538515 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005538513 Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005538514 Nov 
28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005538514.umgtoy=up:active} 2 up:standby Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : osdmap e90: 6 total, 6 up, 6 in Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [DBG] : mgrmap e34: np0005538513.dsfdlx(active, since 15s), standbys: np0005538514.djozup, np0005538515.yfkzhl, np0005538512.zyhkxs Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(cluster) log [INF] : overall HEALTH_OK Nov 28 04:55:35 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:55:35 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:55:35 localhost ceph-mon[287604]: Reconfiguring mon.np0005538513 (monmap changed)... Nov 28 04:55:35 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:55:35 localhost ceph-mon[287604]: Remove daemons mon.np0005538512 Nov 28 04:55:35 localhost ceph-mon[287604]: Safe to remove mon.np0005538512: new quorum should be ['np0005538515', 'np0005538513', 'np0005538514'] (from ['np0005538515', 'np0005538513', 'np0005538514']) Nov 28 04:55:35 localhost ceph-mon[287604]: Removing monitor np0005538512 from monmap... 
Nov 28 04:55:35 localhost ceph-mon[287604]: Removing daemon mon.np0005538512 from np0005538512.localdomain -- ports [] Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515 calling monitor election Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538514 calling monitor election Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538513 calling monitor election Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515 is new leader, mons np0005538515,np0005538513,np0005538514 in quorum (ranks 0,1,2) Nov 28 04:55:35 localhost ceph-mon[287604]: overall HEALTH_OK Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch 
Nov 28 04:55:35 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:35 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.241 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.241 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.272 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.273 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.273 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.273 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.274 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:55:36 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:55:36 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/36973857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.716 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:55:36 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:36 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:36 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) Nov 28 04:55:36 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:55:36 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:36 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost 
ceph-mon[287604]: Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:36 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.953 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.955 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12019MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.955 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:55:36 localhost nova_compute[280168]: 2025-11-28 09:55:36.956 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.238 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.238 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.292 280172 DEBUG 
nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.348 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.349 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.365 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.391 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.419 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: Reconfiguring osd.0 (monmap changed)... 
Nov 28 04:55:37 localhost ceph-mon[287604]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:55:37 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:37 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:55:37 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3157638842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.930 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.935 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.955 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.956 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:55:37 localhost nova_compute[280168]: 2025-11-28 09:55:37.957 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:55:38 localhost nova_compute[280168]: 2025-11-28 09:55:38.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:38 localhost nova_compute[280168]: 2025-11-28 09:55:38.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:38 localhost nova_compute[280168]: 2025-11-28 09:55:38.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:38 localhost nova_compute[280168]: 2025-11-28 09:55:38.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: from='mgr.26581 
172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: Reconfiguring osd.3 (monmap changed)... Nov 28 04:55:38 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:55:38 localhost ceph-mon[287604]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:55:38 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) 
Nov 28 04:55:38 localhost podman[298364]: 2025-11-28 09:55:38.988326996 +0000 UTC m=+0.091258507 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:38 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:55:38 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:39 localhost podman[298364]: 2025-11-28 09:55:39.001401406 +0000 UTC m=+0.104332917 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf 
(image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses 
microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 04:55:39 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:55:39 localhost nova_compute[280168]: 2025-11-28 09:55:39.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' 
entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:55:39 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:39 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:39 localhost ceph-mon[287604]: Removed label mon from host np0005538512.localdomain Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:39 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:40 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:40 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command 
mon_command({"prefix": "auth get", "entity": "mon."} v 0) Nov 28 04:55:40 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Nov 28 04:55:40 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Nov 28 04:55:40 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:40 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:41 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... 
Nov 28 04:55:41 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:55:41 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:41 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:41 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:55:41 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:41 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:41 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:41 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:41 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:55:41 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:41 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:41 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : 
from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:42 localhost ceph-mon[287604]: Reconfiguring mon.np0005538514 (monmap changed)... Nov 28 04:55:42 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:55:42 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:42 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:42 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:42 localhost podman[298437]: Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.28838688 +0000 UTC m=+0.076314491 container create 44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, release=553, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, 
version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:55:42 localhost systemd[1]: Started libpod-conmon-44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc.scope. Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.256673792 +0000 UTC m=+0.044601443 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:42 localhost systemd[1]: Started libcrun container. Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.378221652 +0000 UTC m=+0.166149283 container init 44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, RELEASE=main, name=rhceph, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.390251919 +0000 UTC m=+0.178179530 container start 44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, com.redhat.component=rhceph-container, release=553, name=rhceph, io.buildah.version=1.33.12, distribution-scope=public, build-date=2025-09-24T08:57:55, ceph=True, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.390998773 +0000 UTC m=+0.178926384 container attach 44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, 
RELEASE=main, vendor=Red Hat, Inc., ceph=True, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, description=Red Hat Ceph Storage 7) Nov 28 04:55:42 localhost boring_driscoll[298452]: 167 167 Nov 28 04:55:42 localhost systemd[1]: libpod-44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc.scope: Deactivated successfully. Nov 28 04:55:42 localhost podman[298437]: 2025-11-28 09:55:42.39517402 +0000 UTC m=+0.183101731 container died 44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, ceph=True, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7) Nov 28 04:55:42 localhost podman[298457]: 2025-11-28 09:55:42.488956864 +0000 UTC m=+0.085553464 container remove 
44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=boring_driscoll, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, release=553, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:55:42 localhost systemd[1]: libpod-conmon-44e2ec4883636c5d3917054ce2eb1bb43265bd75b4b1c876de4ee5bbc3af64cc.scope: Deactivated successfully. 
Nov 28 04:55:42 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:42 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:42 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:42 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:42 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) Nov 28 04:55:42 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:55:42 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:42 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:43 localhost ceph-mon[287604]: Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:55:43 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:55:43 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost podman[298527]: Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.217105695 +0000 UTC m=+0.081228301 container create 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, release=553, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, build-date=2025-09-24T08:57:55, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, distribution-scope=public, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, 
io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, version=7) Nov 28 04:55:43 localhost systemd[1]: Started libpod-conmon-4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb.scope. Nov 28 04:55:43 localhost systemd[1]: Started libcrun container. Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.281249313 +0000 UTC m=+0.145371919 container init 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=553, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, distribution-scope=public) Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.186737767 +0000 UTC m=+0.050860383 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.29198476 +0000 UTC m=+0.156107366 container start 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb 
(image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, maintainer=Guillaume Abrioux , GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, name=rhceph, release=553, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, vcs-type=git, RELEASE=main) Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.292209447 +0000 UTC m=+0.156332053 container attach 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, name=rhceph, release=553, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, maintainer=Guillaume Abrioux , 
ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7) Nov 28 04:55:43 localhost magical_lumiere[298541]: 167 167 Nov 28 04:55:43 localhost systemd[1]: var-lib-containers-storage-overlay-02efe27f3da46599feb6ac6679f8631ae01615f76ef6933d3ad577363ed5c130-merged.mount: Deactivated successfully. Nov 28 04:55:43 localhost systemd[1]: libpod-4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb.scope: Deactivated successfully. Nov 28 04:55:43 localhost podman[298527]: 2025-11-28 09:55:43.301308566 +0000 UTC m=+0.165431172 container died 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, vcs-type=git, RELEASE=main, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, release=553, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:55:43 localhost systemd[1]: var-lib-containers-storage-overlay-1c35d031be3c1f3ce2e41b0882ee9864fa3d2667397f4dc81097bbe536f56f13-merged.mount: Deactivated successfully. Nov 28 04:55:43 localhost podman[298546]: 2025-11-28 09:55:43.402699191 +0000 UTC m=+0.092864366 container remove 4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_lumiere, release=553, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, name=rhceph, build-date=2025-09-24T08:57:55, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main) Nov 28 04:55:43 localhost systemd[1]: libpod-conmon-4cfdfc371ff56dc8b0761658dea6bfed389b9b3576e9e8e90f62fd124a7d44eb.scope: Deactivated successfully. 
Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:55:43 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:43 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' 
entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:44 localhost ceph-mon[287604]: Reconfiguring osd.1 (monmap changed)... Nov 28 04:55:44 localhost ceph-mon[287604]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:55:44 localhost ceph-mon[287604]: Removed label mgr from host np0005538512.localdomain Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:55:44 localhost podman[298622]: Nov 28 04:55:44 localhost podman[298622]: 2025-11-28 09:55:44.233020691 +0000 UTC m=+0.077366513 container create 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, release=553, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.33.12, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True) Nov 28 04:55:44 localhost systemd[1]: Started libpod-conmon-59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1.scope. Nov 28 04:55:44 localhost systemd[1]: Started libcrun container. Nov 28 04:55:44 localhost podman[298622]: 2025-11-28 09:55:44.290899978 +0000 UTC m=+0.135245810 container init 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, GIT_CLEAN=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, version=7, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:55:44 
localhost podman[298622]: 2025-11-28 09:55:44.200650613 +0000 UTC m=+0.044996475 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:44 localhost systemd[1]: tmp-crun.jfq0cQ.mount: Deactivated successfully. Nov 28 04:55:44 localhost podman[298622]: 2025-11-28 09:55:44.307391342 +0000 UTC m=+0.151737174 container start 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, GIT_BRANCH=main, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:55:44 localhost podman[298622]: 2025-11-28 09:55:44.307852616 +0000 UTC m=+0.152198448 container attach 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, release=553, version=7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main) Nov 28 04:55:44 localhost pedantic_khayyam[298637]: 167 167 Nov 28 04:55:44 localhost systemd[1]: libpod-59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1.scope: Deactivated successfully. 
Nov 28 04:55:44 localhost podman[298622]: 2025-11-28 09:55:44.311368343 +0000 UTC m=+0.155714175 container died 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, ceph=True) Nov 28 04:55:44 localhost podman[298642]: 2025-11-28 09:55:44.400391101 +0000 UTC m=+0.082250632 container remove 59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_khayyam, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
GIT_BRANCH=main, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64) Nov 28 04:55:44 localhost systemd[1]: libpod-conmon-59f186d56261c765b19a7c01f132fec690d912ab7a450b130871dba8d939d2c1.scope: Deactivated successfully. Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' 
entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:44 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:44 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:45 localhost podman[298719]: Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.192045521 +0000 UTC m=+0.075587698 container create ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, 
name=rhceph, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, release=553, GIT_BRANCH=main, vcs-type=git) Nov 28 04:55:45 localhost ceph-mon[287604]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:55:45 localhost ceph-mon[287604]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:45 localhost systemd[1]: Started libpod-conmon-ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33.scope. Nov 28 04:55:45 localhost systemd[1]: Started libcrun container. 
Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.243485302 +0000 UTC m=+0.127027499 container init ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, release=553, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7) Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.253055373 +0000 UTC m=+0.136597570 container start ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, maintainer=Guillaume 
Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, ceph=True, vcs-type=git, release=553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph) Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.253358342 +0000 UTC m=+0.136900649 container attach ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, architecture=x86_64, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7) Nov 28 04:55:45 localhost focused_mendel[298734]: 167 167 Nov 28 04:55:45 localhost systemd[1]: libpod-ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33.scope: 
Deactivated successfully. Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.256047895 +0000 UTC m=+0.139590122 container died ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, distribution-scope=public, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_BRANCH=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True) Nov 28 04:55:45 localhost podman[298719]: 2025-11-28 09:55:45.162637303 +0000 UTC m=+0.046179530 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:45 localhost systemd[1]: tmp-crun.Gn7A0M.mount: Deactivated successfully. Nov 28 04:55:45 localhost systemd[1]: var-lib-containers-storage-overlay-356bf8abb1d168d3a6c292063641cc54d7ee0bbfdf795d9250c3691fc3519770-merged.mount: Deactivated successfully. Nov 28 04:55:45 localhost systemd[1]: var-lib-containers-storage-overlay-ae7bd569944352693d0d16bf51fef07105e9129baf4902bdc9ac56802231dfdf-merged.mount: Deactivated successfully. 
Nov 28 04:55:45 localhost podman[298740]: 2025-11-28 09:55:45.359447342 +0000 UTC m=+0.089561336 container remove ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_mendel, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, release=553, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, maintainer=Guillaume Abrioux ) Nov 28 04:55:45 localhost systemd[1]: libpod-conmon-ddc82d327631c3a0e384271adc277944cdd22a26b3a89fb2d4ef2baa11f19c33.scope: Deactivated successfully. 
Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:45 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:45 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:55:45 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:55:45 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:45 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:45 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:46 localhost podman[298808]: Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.030868421 +0000 UTC m=+0.078925541 container create 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, release=553, maintainer=Guillaume Abrioux , GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, ceph=True, RELEASE=main, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:55:46 localhost systemd[1]: Started libpod-conmon-1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f.scope. Nov 28 04:55:46 localhost systemd[1]: Started libcrun container. 
Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.096202905 +0000 UTC m=+0.144260025 container init 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, io.buildah.version=1.33.12, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, RELEASE=main, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, ceph=True) Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.000253096 +0000 UTC m=+0.048310236 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.105194611 +0000 UTC m=+0.153251731 container start 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, name=rhceph, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-09-24T08:57:55, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, 
io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, io.buildah.version=1.33.12, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, com.redhat.component=rhceph-container) Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.105468189 +0000 UTC m=+0.153525339 container attach 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, io.buildah.version=1.33.12, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:55:46 
localhost vigilant_dirac[298823]: 167 167 Nov 28 04:55:46 localhost systemd[1]: libpod-1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f.scope: Deactivated successfully. Nov 28 04:55:46 localhost podman[298808]: 2025-11-28 09:55:46.109801451 +0000 UTC m=+0.157858571 container died 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, distribution-scope=public) Nov 28 04:55:46 localhost podman[298828]: 2025-11-28 09:55:46.21032371 +0000 UTC m=+0.087850203 container remove 1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_dirac, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, architecture=x86_64, distribution-scope=public, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, 
GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, GIT_BRANCH=main, release=553, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_CLEAN=True, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:55:46 localhost systemd[1]: libpod-conmon-1dba93a69a2672784a118699c4166db7c44dc4b27b71f05c9c4806d65303e21f.scope: Deactivated successfully. Nov 28 04:55:46 localhost ceph-mon[287604]: Removed label _admin from host np0005538512.localdomain Nov 28 04:55:46 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... 
Nov 28 04:55:46 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:55:46 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:46 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:46 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:46 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:46 localhost systemd[1]: var-lib-containers-storage-overlay-9f891d5a2999601500f8e6128c05dd91a0551ae2d99d65e83703de4ab238969c-merged.mount: Deactivated successfully. 
Nov 28 04:55:46 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:46 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:46 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:46 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) Nov 28 04:55:46 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:55:46 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) Nov 28 04:55:46 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch Nov 28 04:55:46 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:46 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:46 localhost podman[298899]: Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.893430776 +0000 UTC m=+0.075687682 container create 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vcs-type=git, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, release=553, description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:55:46 localhost systemd[1]: Started libpod-conmon-5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7.scope. Nov 28 04:55:46 localhost systemd[1]: Started libcrun container. 
Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.951685165 +0000 UTC m=+0.133942071 container init 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, maintainer=Guillaume Abrioux , distribution-scope=public, architecture=x86_64) Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.960861634 +0000 UTC m=+0.143118540 container start 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, 
GIT_BRANCH=main, RELEASE=main, ceph=True, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, version=7, distribution-scope=public) Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.961310078 +0000 UTC m=+0.143566994 container attach 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-09-24T08:57:55, GIT_CLEAN=True, maintainer=Guillaume Abrioux , ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, release=553, version=7, io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:55:46 localhost nifty_lamport[298915]: 167 167 Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.864526114 +0000 UTC m=+0.046783040 image pull 
registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:55:46 localhost systemd[1]: libpod-5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7.scope: Deactivated successfully. Nov 28 04:55:46 localhost podman[298899]: 2025-11-28 09:55:46.965505407 +0000 UTC m=+0.147762353 container died 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.33.12, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , version=7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.) 
Nov 28 04:55:47 localhost podman[298920]: 2025-11-28 09:55:47.061148897 +0000 UTC m=+0.085138491 container remove 5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nifty_lamport, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-09-24T08:57:55, io.openshift.expose-services=, version=7, GIT_CLEAN=True, name=rhceph, release=553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., architecture=x86_64) Nov 28 04:55:47 localhost systemd[1]: libpod-conmon-5b30e265da750d685f4166e88773bce32d3929567f421217fad6248a512bc1a7.scope: Deactivated successfully. 
Nov 28 04:55:47 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:47 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:47 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... Nov 28 04:55:47 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:55:47 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:55:47 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:47 localhost systemd[1]: var-lib-containers-storage-overlay-56d6c5a9ba1509de57ef1c1ad10602d79fadf4f08d4bcfd86ec49c81770adc01-merged.mount: Deactivated successfully. Nov 28 04:55:48 localhost ceph-mon[287604]: Reconfiguring mon.np0005538515 (monmap changed)... 
Nov 28 04:55:48 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:55:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:55:48 localhost podman[298955]: 2025-11-28 09:55:48.798507459 +0000 UTC m=+0.092886046 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 04:55:48 localhost podman[298955]: 2025-11-28 09:55:48.812492066 +0000 UTC m=+0.106870663 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 04:55:48 localhost systemd[1]: 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:48 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0) Nov 28 04:55:48 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:48 localhost podman[298956]: 2025-11-28 09:55:48.903312189 +0000 UTC m=+0.197302995 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 04:55:48 localhost podman[298956]: 2025-11-28 09:55:48.943400363 +0000 UTC m=+0.237391219 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 04:55:48 localhost podman[298957]: 2025-11-28 09:55:48.955934595 +0000 UTC m=+0.246006632 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3) Nov 28 04:55:48 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:55:49 localhost podman[298958]: 2025-11-28 09:55:49.010269005 +0000 UTC m=+0.298334550 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:55:49 localhost podman[298958]: 2025-11-28 09:55:49.019350992 +0000 UTC m=+0.307416547 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:55:49 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:55:49 localhost podman[298957]: 2025-11-28 09:55:49.037807175 +0000 UTC m=+0.327879232 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent) Nov 28 04:55:49 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:55:49 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:49 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:49 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:55:49 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:49 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:49 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:55:49 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:49 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:55:49 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:55:50 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' 
entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.028808) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750028897, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2742, "num_deletes": 255, "total_data_size": 7857638, "memory_usage": 8325856, "flush_reason": "Manual Compaction"} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750059933, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 5010030, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19246, "largest_seqno": 21983, "table_properties": {"data_size": 4998674, "index_size": 7029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 29004, "raw_average_key_size": 22, "raw_value_size": 4973877, "raw_average_value_size": 3861, "num_data_blocks": 305, "num_entries": 1288, "num_filter_entries": 1288, "num_deletions": 253, "num_merge_operands": 0, 
"num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323655, "oldest_key_time": 1764323655, "file_creation_time": 1764323750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 31199 microseconds, and 11858 cpu microseconds. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.060019) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 5010030 bytes OK Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.060051) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.062024) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.062052) EVENT_LOG_v1 {"time_micros": 1764323750062044, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.062118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 7844585, prev total WAL file size 7876723, number of live WAL files 2. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.064103) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. 
'7061786F73003131303435' seq:0, type:0; will stop at (end) Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(4892KB)], [30(15MB)] Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750064164, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 20816740, "oldest_snapshot_seqno": -1} Nov 28 04:55:50 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11136 keys, 17521930 bytes, temperature: kUnknown Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750216310, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17521930, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17457349, "index_size": 35680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27845, "raw_key_size": 297892, "raw_average_key_size": 26, "raw_value_size": 17266374, "raw_average_value_size": 1550, "num_data_blocks": 1369, "num_entries": 11136, "num_filter_entries": 11136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, 
"format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.216611) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17521930 bytes Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.218794) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.7 rd, 115.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.8, 15.1 +0.0 blob) out(16.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 11687, records dropped: 551 output_compression: NoCompression Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.218834) EVENT_LOG_v1 {"time_micros": 1764323750218822, "job": 16, "event": "compaction_finished", "compaction_time_micros": 152249, "compaction_time_cpu_micros": 47102, "output_level": 6, "num_output_files": 1, "total_output_size": 17521930, 
"num_input_records": 11687, "num_output_records": 11136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750219690, "job": 16, "event": "table_file_deletion", "file_number": 32} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750221277, "job": 16, "event": "table_file_deletion", "file_number": 30} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.063969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.221327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.221335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.221338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.221341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original 
Log Time 2025/11/28-09:55:50.221344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:50 localhost ceph-mon[287604]: Removing np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:50 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:50 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:50 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:55:50 localhost ceph-mon[287604]: Removing np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:55:50 localhost ceph-mon[287604]: Removing np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:55:50 localhost ceph-mon[287604]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:50 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: log_channel(audit) log [INF] : 
from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:55:50 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:50 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.570987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750571018, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 316, "num_deletes": 253, "total_data_size": 112396, "memory_usage": 120600, "flush_reason": "Manual Compaction"} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750574160, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 112571, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21984, "largest_seqno": 22299, "table_properties": {"data_size": 110437, "index_size": 309, 
"index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4933, "raw_average_key_size": 16, "raw_value_size": 106110, "raw_average_value_size": 359, "num_data_blocks": 11, "num_entries": 295, "num_filter_entries": 295, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323750, "oldest_key_time": 1764323750, "file_creation_time": 1764323750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3231 microseconds, and 1170 cpu microseconds. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.574212) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 112571 bytes OK Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.574235) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.575961) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.575990) EVENT_LOG_v1 {"time_micros": 1764323750575981, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.576011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 110099, prev total WAL file size 110099, number of live WAL files 2. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.576476) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323935' seq:72057594037927935, type:22 .. 
'6B760031353439' seq:0, type:0; will stop at (end) Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(109KB)], [33(16MB)] Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750576514, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 17634501, "oldest_snapshot_seqno": -1} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 10908 keys, 16607435 bytes, temperature: kUnknown Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750684841, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 16607435, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16545608, "index_size": 33438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27333, "raw_key_size": 294665, "raw_average_key_size": 27, "raw_value_size": 16359786, "raw_average_value_size": 1499, "num_data_blocks": 1257, "num_entries": 10908, "num_filter_entries": 10908, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323465, "oldest_key_time": 0, "file_creation_time": 1764323750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5fedd929-5f7c-4f1d-86e7-c95af9bc6d32", "db_session_id": "18KD68ISQNH5R0YWI96C", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.685472) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 16607435 bytes Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.687315) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.5 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 16.7 +0.0 blob) out(15.8 +0.0 blob), read-write-amplify(304.2) write-amplify(147.5) OK, records in: 11431, records dropped: 523 output_compression: NoCompression Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.687351) EVENT_LOG_v1 {"time_micros": 1764323750687333, "job": 18, "event": "compaction_finished", "compaction_time_micros": 108491, "compaction_time_cpu_micros": 46361, "output_level": 6, "num_output_files": 1, "total_output_size": 16607435, "num_input_records": 11431, "num_output_records": 10908, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005538515/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750687545, "job": 18, "event": "table_file_deletion", "file_number": 35} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323750690350, "job": 18, "event": "table_file_deletion", "file_number": 33} Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.576395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.690451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.690460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.690463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.690466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ceph-mon[287604]: rocksdb: (Original Log Time 2025/11/28-09:55:50.690468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:55:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:55:50.838 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:55:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:55:50.838 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:55:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:55:50.838 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:55:51 localhost ceph-mon[287604]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:51 localhost ceph-mon[287604]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:55:51 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:51 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:55:51 localhost podman[299340]: 2025-11-28 09:55:51.978743634 +0000 UTC m=+0.084434478 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:55:52 localhost podman[299340]: 2025-11-28 09:55:52.018477097 +0000 UTC m=+0.124167931 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:55:52 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:55:52 localhost ceph-mon[287604]: Removing daemon mgr.np0005538512.zyhkxs from np0005538512.localdomain -- ports [8765] Nov 28 04:55:52 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005538512.zyhkxs"} v 0) Nov 28 04:55:52 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth rm", "entity": "mgr.np0005538512.zyhkxs"} : dispatch Nov 28 04:55:52 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005538512.zyhkxs"}]': finished Nov 28 04:55:52 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Nov 28 04:55:52 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:52 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) Nov 28 04:55:52 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:52 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:55:52 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 04:55:53 localhost ceph-mon[287604]: Removing key for mgr.np0005538512.zyhkxs Nov 28 04:55:53 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth 
rm", "entity": "mgr.np0005538512.zyhkxs"} : dispatch Nov 28 04:55:53 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005538512.zyhkxs"}]': finished Nov 28 04:55:53 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:53 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command 
mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 04:55:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:54 localhost podman[299399]: 2025-11-28 09:55:54.656650264 +0000 UTC m=+0.080647904 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, 
org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd) Nov 28 04:55:54 localhost podman[299399]: 2025-11-28 09:55:54.696597613 +0000 UTC m=+0.120595273 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:55:54 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:55:54 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 04:55:54 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:55 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain.devices.0}] v 0) Nov 28 04:55:55 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538512.localdomain}] v 0) Nov 28 04:55:55 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:55 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 
handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) Nov 28 04:55:55 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:55 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:55 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:55 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:55:56 localhost ceph-mon[287604]: Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:55:56 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:55:56 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 
handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:55:56 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:56 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:57 localhost ceph-mon[287604]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:55:57 localhost ceph-mon[287604]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:55:57 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:55:57 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:57 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:57 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:57 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0) Nov 28 04:55:57 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:55:57 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:57 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:57 localhost openstack_network_exporter[240973]: ERROR 09:55:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:55:57 localhost openstack_network_exporter[240973]: ERROR 09:55:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:55:57 localhost openstack_network_exporter[240973]: ERROR 09:55:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:55:57 localhost openstack_network_exporter[240973]: ERROR 09:55:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:55:57 localhost openstack_network_exporter[240973]: Nov 28 04:55:57 localhost openstack_network_exporter[240973]: ERROR 09:55:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:55:57 localhost openstack_network_exporter[240973]: Nov 28 04:55:58 localhost ceph-mon[287604]: Added label _no_schedule to host np0005538512.localdomain Nov 28 
04:55:58 localhost ceph-mon[287604]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538512.localdomain Nov 28 04:55:58 localhost ceph-mon[287604]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:55:58 localhost ceph-mon[287604]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:55:58 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:58 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:58 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:55:58 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:58 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:58 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:58 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:58 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) Nov 28 04:55:58 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:58 
localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:58 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:55:58 localhost podman[239012]: time="2025-11-28T09:55:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:55:58 localhost podman[239012]: @ - - [28/Nov/2025:09:55:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:55:58 localhost podman[239012]: @ - - [28/Nov/2025:09:55:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19180 "" "Go-http-client/1.1" Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"} v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"} : dispatch Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"}]': finished Nov 28 04:55:59 localhost ceph-mon[287604]: Reconfiguring osd.5 (monmap changed)... 
Nov 28 04:55:59 localhost ceph-mon[287604]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"} : dispatch Nov 28 04:55:59 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"}]': finished Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get-or-create", "entity": 
"mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "mgr services"} v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mgr services"} : dispatch Nov 28 04:55:59 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:55:59 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:56:00 localhost ceph-mon[287604]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:56:00 localhost ceph-mon[287604]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain
Nov 28 04:56:00 localhost ceph-mon[287604]: Removed host np0005538512.localdomain
Nov 28 04:56:00 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:00 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:00 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:56:00 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 04:56:00 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 28 04:56:00 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 28 04:56:00 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Nov 28 04:56:00 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:56:00 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.624 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:56:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:56:01 localhost ceph-mon[287604]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)...
Nov 28 04:56:01 localhost ceph-mon[287604]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain
Nov 28 04:56:01 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:01 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:01 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:56:01 localhost sshd[299419]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:56:01 localhost systemd-logind[763]: New session 69 of user tripleo-admin.
Nov 28 04:56:01 localhost systemd[1]: Created slice User Slice of UID 1003.
Nov 28 04:56:01 localhost systemd[1]: Starting User Runtime Directory /run/user/1003...
Nov 28 04:56:01 localhost systemd[1]: Finished User Runtime Directory /run/user/1003.
Nov 28 04:56:01 localhost systemd[1]: Starting User Manager for UID 1003...
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:01 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 28 04:56:01 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 28 04:56:01 localhost systemd[299423]: Queued start job for default target Main User Target.
Nov 28 04:56:01 localhost systemd[299423]: Created slice User Application Slice.
Nov 28 04:56:01 localhost systemd[299423]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 28 04:56:01 localhost systemd[299423]: Started Daily Cleanup of User's Temporary Directories.
Nov 28 04:56:01 localhost systemd[299423]: Reached target Paths.
Nov 28 04:56:01 localhost systemd[299423]: Reached target Timers.
Nov 28 04:56:01 localhost systemd[299423]: Starting D-Bus User Message Bus Socket...
Nov 28 04:56:01 localhost systemd[299423]: Starting Create User's Volatile Files and Directories...
Nov 28 04:56:01 localhost systemd[299423]: Listening on D-Bus User Message Bus Socket.
Nov 28 04:56:01 localhost systemd[299423]: Reached target Sockets.
Nov 28 04:56:01 localhost systemd[299423]: Finished Create User's Volatile Files and Directories.
Nov 28 04:56:01 localhost systemd[299423]: Reached target Basic System.
Nov 28 04:56:01 localhost systemd[299423]: Reached target Main User Target.
Nov 28 04:56:01 localhost systemd[299423]: Startup finished in 162ms.
Nov 28 04:56:01 localhost systemd[1]: Started User Manager for UID 1003.
Nov 28 04:56:01 localhost systemd[1]: Started Session 69 of User tripleo-admin.
Nov 28 04:56:02 localhost ceph-mon[287604]: Reconfiguring mon.np0005538513 (monmap changed)...
Nov 28 04:56:02 localhost ceph-mon[287604]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain
Nov 28 04:56:02 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:02 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:02 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:56:02 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:02 localhost python3[299584]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line= - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 28 04:56:03 localhost python3[299730]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:56:04 localhost python3[299875]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 28 04:56:04 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 04:56:05 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:05 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:56:06 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:07 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 28 04:56:07 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:07 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 04:56:07 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 04:56:07 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 28 04:56:07 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:56:07 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 04:56:07 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:07 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 28 04:56:07 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 28 04:56:08 localhost ceph-mon[287604]: Saving service mon spec with placement label:mon
Nov 28 04:56:08 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:08 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:56:08 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 04:56:09 localhost systemd[1]: tmp-crun.zqJkjG.mount: Deactivated successfully.
Nov 28 04:56:09 localhost podman[299912]: 2025-11-28 09:56:09.992369907 +0000 UTC m=+0.096436815 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Nov 28 04:56:10 localhost podman[299912]: 2025-11-28 09:56:10.036451113 +0000 UTC m=+0.140518071 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7)
Nov 28 04:56:10 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 04:56:10 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 04:56:10 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:10 localhost ceph-mon[287604]: mon.np0005538515@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:56:10 localhost ceph-mon[287604]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:10 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "quorum_status"} v 0)
Nov 28 04:56:10 localhost ceph-mon[287604]: log_channel(audit) log [DBG] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "quorum_status"} : dispatch
Nov 28 04:56:10 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e13 handle_command mon_command({"prefix": "mon rm", "name": "np0005538515"} v 0)
Nov 28 04:56:10 localhost ceph-mon[287604]: log_channel(audit) log [INF] : from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "mon rm", "name": "np0005538515"} : dispatch
Nov 28 04:56:10 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5575ffdcef20 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Nov 28 04:56:10 localhost ceph-mon[287604]: mon.np0005538515@0(leader) e14 removed from monmap, suicide.
Nov 28 04:56:10 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Nov 28 04:56:10 localhost ceph-mgr[286188]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Nov 28 04:56:10 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5575ffdcf600 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Nov 28 04:56:10 localhost podman[299947]: 2025-11-28 09:56:10.774332712 +0000 UTC m=+0.055189956 container died a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, com.redhat.component=rhceph-container, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-type=git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=553)
Nov 28 04:56:10 localhost systemd[1]: var-lib-containers-storage-overlay-04c7a1767f0a2009eee63539253f3f9f0fbc337fc72bb39a68aab86969f5acee-merged.mount: Deactivated successfully.
Nov 28 04:56:10 localhost podman[299947]: 2025-11-28 09:56:10.81061089 +0000 UTC m=+0.091468104 container remove a20fbb4af4b220d896878368414f00f458b36bd01f689cea18d6929c00ea38cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, distribution-scope=public, name=rhceph, version=7, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, architecture=x86_64, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True)
Nov 28 04:56:10 localhost ceph-osd[32393]: --2- [v2:172.18.0.108:6800/2860773178,v1:172.18.0.108:6801/2860773178] >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x55ab8edee400 0x55ab8d898000 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Nov 28 04:56:11 localhost systemd[1]: ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1@mon.np0005538515.service: Deactivated successfully.
Nov 28 04:56:11 localhost systemd[1]: Stopped Ceph mon.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1.
Nov 28 04:56:11 localhost systemd[1]: ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1@mon.np0005538515.service: Consumed 12.041s CPU time.
Nov 28 04:56:11 localhost systemd[1]: Reloading.
Nov 28 04:56:11 localhost systemd-rc-local-generator[300319]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:56:11 localhost systemd-sysv-generator[300324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:11 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 04:56:18 localhost systemd[1]: tmp-crun.NOdvD3.mount: Deactivated successfully.
Nov 28 04:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 04:56:19 localhost podman[300456]: 2025-11-28 09:56:19.007139766 +0000 UTC m=+0.106860683 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 04:56:19 localhost podman[300472]: 2025-11-28 09:56:19.080398704 +0000 UTC m=+0.070491204 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 28 04:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 04:56:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 04:56:19 localhost podman[300456]: 2025-11-28 09:56:19.098032862 +0000 UTC m=+0.197753779 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 04:56:19 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 04:56:19 localhost podman[300472]: 2025-11-28 09:56:19.122484778 +0000 UTC m=+0.112577288 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z',
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Nov 28 04:56:19 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:56:19 localhost podman[300497]: 2025-11-28 09:56:19.180819539 +0000 UTC m=+0.072907087 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:56:19 localhost podman[300495]: 2025-11-28 09:56:19.250936409 +0000 UTC m=+0.143250074 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 04:56:19 localhost podman[300495]: 2025-11-28 09:56:19.259393878 +0000 UTC m=+0.151707513 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:56:19 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:56:19 localhost podman[300497]: 2025-11-28 09:56:19.316221763 +0000 UTC m=+0.208309341 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:56:19 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:56:20 localhost nova_compute[280168]: 2025-11-28 09:56:20.785 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:56:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:56:22 localhost systemd[1]: tmp-crun.7VGf4I.mount: Deactivated successfully. 
Nov 28 04:56:22 localhost podman[300560]: 2025-11-28 09:56:22.820497071 +0000 UTC m=+0.089554835 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:56:22 localhost podman[300560]: 2025-11-28 09:56:22.832497528 +0000 UTC m=+0.101555262 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:56:22 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:56:23 localhost podman[300615]: Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.276947877 +0000 UTC m=+0.081620993 container create e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, GIT_BRANCH=main, distribution-scope=public, version=7, architecture=x86_64, RELEASE=main, description=Red Hat Ceph Storage 7, release=553, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:56:23 localhost systemd[1]: Started libpod-conmon-e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd.scope. Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.24527117 +0000 UTC m=+0.049944326 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:23 localhost systemd[1]: Started libcrun container. 
Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.364625464 +0000 UTC m=+0.169298590 container init e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, version=7, architecture=x86_64, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.expose-services=, release=553, vendor=Red Hat, Inc., GIT_CLEAN=True) Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.376547438 +0000 UTC m=+0.181220564 container start e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux , GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, ceph=True, 
build-date=2025-09-24T08:57:55, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.377257169 +0000 UTC m=+0.181930285 container attach e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=) Nov 28 04:56:23 localhost romantic_kirch[300630]: 167 167 Nov 28 04:56:23 localhost systemd[1]: libpod-e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd.scope: 
Deactivated successfully. Nov 28 04:56:23 localhost podman[300615]: 2025-11-28 09:56:23.381893141 +0000 UTC m=+0.186566277 container died e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, io.openshift.expose-services=, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, maintainer=Guillaume Abrioux , release=553, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=) Nov 28 04:56:23 localhost podman[300635]: 2025-11-28 09:56:23.48537212 +0000 UTC m=+0.088193184 container remove e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_kirch, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.tags=rhceph ceph, 
build-date=2025-09-24T08:57:55, io.openshift.expose-services=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, release=553, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=) Nov 28 04:56:23 localhost systemd[1]: libpod-conmon-e42102336fc1acf5bf48ef4e3d2fcf4221a4f50af373c5c14c0bc253f90e4ccd.scope: Deactivated successfully. Nov 28 04:56:23 localhost systemd[1]: var-lib-containers-storage-overlay-23505fcdc6081ff9353caafebf0151945c6c8b473f91d1a30963d8e96ed8a8ff-merged.mount: Deactivated successfully. Nov 28 04:56:24 localhost podman[300753]: Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.197556073 +0000 UTC m=+0.075506326 container create eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, build-date=2025-09-24T08:57:55, ceph=True, version=7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, 
description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph) Nov 28 04:56:24 localhost systemd[1]: Started libpod-conmon-eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea.scope. Nov 28 04:56:24 localhost systemd[1]: Started libcrun container. Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.251646726 +0000 UTC m=+0.129596979 container init eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.2596749 +0000 UTC m=+0.137625133 container start eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, CEPH_POINT_RELEASE=, 
GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, architecture=x86_64, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, release=553, description=Red Hat Ceph Storage 7) Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.260043272 +0000 UTC m=+0.137993595 container attach eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, release=553, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, maintainer=Guillaume 
Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Nov 28 04:56:24 localhost vigilant_moser[300768]: 167 167 Nov 28 04:56:24 localhost systemd[1]: libpod-eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea.scope: Deactivated successfully. Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.263409735 +0000 UTC m=+0.141360098 container died eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, release=553, GIT_BRANCH=main, name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, vendor=Red Hat, Inc.) 
Nov 28 04:56:24 localhost podman[300753]: 2025-11-28 09:56:24.166425983 +0000 UTC m=+0.044376366 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:24 localhost podman[300773]: 2025-11-28 09:56:24.333972889 +0000 UTC m=+0.067023488 container remove eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_moser, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux , io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, GIT_BRANCH=main, vcs-type=git, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True) Nov 28 04:56:24 localhost systemd[1]: libpod-conmon-eb3b79bfca3fe0dd2f5617abe87251038f6b3fc75f15452003b4a70bfb9d1fea.scope: Deactivated successfully. 
Nov 28 04:56:24 localhost podman[300862]: Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.795989935 +0000 UTC m=+0.077744505 container create e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, RELEASE=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, maintainer=Guillaume Abrioux ) Nov 28 04:56:24 localhost systemd[1]: tmp-crun.ppFrcD.mount: Deactivated successfully. Nov 28 04:56:24 localhost systemd[1]: var-lib-containers-storage-overlay-7c93fe0f4b18fc1eb31bd126ceaf6e89675e8948f2a834fc31751de85703b260-merged.mount: Deactivated successfully. Nov 28 04:56:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:56:24 localhost systemd[1]: Started libpod-conmon-e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1.scope. 
Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.763983037 +0000 UTC m=+0.045737597 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:24 localhost systemd[1]: Started libcrun container. Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.887790658 +0000 UTC m=+0.169545218 container init e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux , release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.33.12) Nov 28 04:56:24 localhost naughty_williams[300883]: 167 167 Nov 28 04:56:24 localhost systemd[1]: libpod-e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1.scope: Deactivated successfully. 
Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.902115395 +0000 UTC m=+0.183869965 container start e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, release=553) Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.903112515 +0000 UTC m=+0.184867125 container attach e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, name=rhceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7) Nov 28 04:56:24 localhost podman[300862]: 2025-11-28 09:56:24.905860589 +0000 UTC m=+0.187615179 container died e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, vcs-type=git, architecture=x86_64) Nov 28 04:56:24 localhost podman[300877]: 2025-11-28 09:56:24.984751007 +0000 UTC m=+0.139618833 container health_status 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 04:56:25 localhost podman[300892]: 2025-11-28 09:56:25.042498141 +0000 UTC m=+0.130882227 container remove e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_williams, 
GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.12, vcs-type=git, build-date=2025-09-24T08:57:55, ceph=True, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:56:25 localhost systemd[1]: libpod-conmon-e3833cc75ac3e5cdbca0624e2fd30fa9a165398b3d430dfb3ab1f260ea7241e1.scope: Deactivated successfully. 
Nov 28 04:56:25 localhost podman[300877]: 2025-11-28 09:56:25.073800746 +0000 UTC m=+0.228668572 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:56:25 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:56:25 localhost podman[300928]: Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.149692853 +0000 UTC m=+0.067124690 container create 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux , release=553, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, ceph=True) Nov 28 04:56:25 localhost systemd[1]: Started libpod-conmon-27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e.scope. Nov 28 04:56:25 localhost systemd[1]: Started libcrun container. 
Nov 28 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f7d916c15565b1c0e2f8063d0e3902e0ff4155098303eb460f7197bc598bec/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f7d916c15565b1c0e2f8063d0e3902e0ff4155098303eb460f7197bc598bec/merged/tmp/config supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f7d916c15565b1c0e2f8063d0e3902e0ff4155098303eb460f7197bc598bec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:25 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f7d916c15565b1c0e2f8063d0e3902e0ff4155098303eb460f7197bc598bec/merged/var/lib/ceph/mon/ceph-np0005538515 supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.206909721 +0000 UTC m=+0.124341548 container init 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, distribution-scope=public, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True) Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.215620487 +0000 UTC m=+0.133052334 container start 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, com.redhat.license_terms=https://www.redhat.com/agreements, release=553, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , version=7, distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc.) 
Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.216093611 +0000 UTC m=+0.133525448 container attach 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , RELEASE=main, release=553, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True) Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.125599267 +0000 UTC m=+0.043031144 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:25 localhost systemd[1]: libpod-27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e.scope: Deactivated successfully. 
Nov 28 04:56:25 localhost podman[300928]: 2025-11-28 09:56:25.311976058 +0000 UTC m=+0.229408175 container died 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, RELEASE=main, release=553, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:56:25 localhost podman[300972]: 2025-11-28 09:56:25.40670705 +0000 UTC m=+0.084119909 container remove 27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_nobel, CEPH_POINT_RELEASE=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, 
io.buildah.version=1.33.12, ceph=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=) Nov 28 04:56:25 localhost systemd[1]: libpod-conmon-27c18f369328566a4b4030b15f216752f36c58ddfce2edccbef4823adad7530e.scope: Deactivated successfully. Nov 28 04:56:25 localhost systemd[1]: Reloading. Nov 28 04:56:25 localhost systemd-rc-local-generator[301009]: /etc/rc.d/rc.local is not marked executable, skipping. Nov 28 04:56:25 localhost systemd-sysv-generator[301016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust. Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:25 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:25 localhost systemd[1]: var-lib-containers-storage-overlay-15aa363c80d8a8a19ee62ad5a41cdb499331b3bc5f00fd34ba725e34fa5cee77-merged.mount: Deactivated successfully.
Nov 28 04:56:25 localhost systemd[1]: Reloading.
Nov 28 04:56:25 localhost systemd-rc-local-generator[301052]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 28 04:56:25 localhost systemd-sysv-generator[301058]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:26 localhost systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload Nov 28 04:56:26 localhost systemd[1]: Starting Ceph mon.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1... Nov 28 04:56:26 localhost podman[301116]: Nov 28 04:56:26 localhost podman[301116]: 2025-11-28 09:56:26.560366013 +0000 UTC m=+0.069132192 container create 9d25083944a0821d6aa6b270a71605ed33c6aecefc4f4532b654f92a63e98682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux , vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, name=rhceph, CEPH_POINT_RELEASE=, 
io.buildah.version=1.33.12, version=7) Nov 28 04:56:26 localhost systemd[1]: tmp-crun.KHG8rT.mount: Deactivated successfully. Nov 28 04:56:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e10033ff9163dac6e3ff16d3e55af3f313132156d2843a6fbb3e3458282cfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e10033ff9163dac6e3ff16d3e55af3f313132156d2843a6fbb3e3458282cfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e10033ff9163dac6e3ff16d3e55af3f313132156d2843a6fbb3e3458282cfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e10033ff9163dac6e3ff16d3e55af3f313132156d2843a6fbb3e3458282cfc/merged/var/lib/ceph/mon/ceph-np0005538515 supports timestamps until 2038 (0x7fffffff) Nov 28 04:56:26 localhost podman[301116]: 2025-11-28 09:56:26.618968382 +0000 UTC m=+0.127734591 container init 9d25083944a0821d6aa6b270a71605ed33c6aecefc4f4532b654f92a63e98682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, io.buildah.version=1.33.12, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:56:26 localhost podman[301116]: 2025-11-28 09:56:26.630993629 +0000 UTC m=+0.139759848 container start 9d25083944a0821d6aa6b270a71605ed33c6aecefc4f4532b654f92a63e98682 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mon-np0005538515, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.33.12, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=553, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:56:26 localhost podman[301116]: 2025-11-28 09:56:26.534177993 +0000 UTC m=+0.042944262 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:26 localhost bash[301116]: 
9d25083944a0821d6aa6b270a71605ed33c6aecefc4f4532b654f92a63e98682 Nov 28 04:56:26 localhost systemd[1]: Started Ceph mon.np0005538515 for 2c5417c9-00eb-57d5-a565-ddecbc7995c1. Nov 28 04:56:26 localhost ceph-mon[301134]: set uid:gid to 167:167 (ceph:ceph) Nov 28 04:56:26 localhost ceph-mon[301134]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2 Nov 28 04:56:26 localhost ceph-mon[301134]: pidfile_write: ignore empty --pid-file Nov 28 04:56:26 localhost ceph-mon[301134]: load: jerasure load: lrc Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: RocksDB version: 7.9.2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Git sha 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Compile date 2025-09-23 00:00:00 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: DB SUMMARY Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: DB Session ID: 7KM5GJAJPD54H6HSLJHG Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: CURRENT file: CURRENT Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: IDENTITY file: IDENTITY Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005538515/store.db dir, Total Num: 0, files: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005538515/store.db: 000004.log size: 636 ; Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.error_if_exists: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.create_if_missing: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.paranoid_checks: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.flush_verify_memtable_count: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.track_and_verify_wals_in_manifest: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: 
Options.verify_sst_unique_id_in_manifest: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.env: 0x561ade3479e0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.fs: PosixFileSystem Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.info_log: 0x561ae070ad20 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_file_opening_threads: 16 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.statistics: (nil) Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.use_fsync: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_log_file_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_manifest_file_size: 1073741824 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.log_file_time_to_roll: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.keep_log_file_num: 1000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.recycle_log_file_num: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_fallocate: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_mmap_reads: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_mmap_writes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.use_direct_reads: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.create_missing_column_families: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.db_log_dir: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.wal_dir: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.table_cache_numshardbits: 6 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.WAL_ttl_seconds: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.WAL_size_limit_MB: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_write_batch_group_size_bytes: 1048576 Nov 28 
04:56:26 localhost ceph-mon[301134]: rocksdb: Options.manifest_preallocation_size: 4194304 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.is_fd_close_on_exec: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.advise_random_on_open: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.db_write_buffer_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.write_buffer_manager: 0x561ae071b540 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.access_hint_on_compaction_start: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.random_access_max_buffer_size: 1048576 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.use_adaptive_mutex: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.rate_limiter: (nil) Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.wal_recovery_mode: 2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enable_thread_tracking: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enable_pipelined_write: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.unordered_write: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_concurrent_memtable_write: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enable_write_thread_adaptive_yield: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.write_thread_max_yield_usec: 100 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.write_thread_slow_yield_usec: 3 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.row_cache: None Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.wal_filter: None Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.avoid_flush_during_recovery: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_ingest_behind: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: 
rocksdb: Options.two_write_queues: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.manual_wal_flush: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.wal_compression: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.atomic_flush: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.avoid_unnecessary_blocking_io: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.persist_stats_to_disk: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.write_dbid_to_manifest: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.log_readahead_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.file_checksum_gen_factory: Unknown Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.best_efforts_recovery: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bgerror_resume_count: 2147483647 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bgerror_resume_retry_interval: 1000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.allow_data_in_errors: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.db_host_id: __hostname__ Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enforce_single_del_contracts: true Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_background_jobs: 2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_background_compactions: -1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_subcompactions: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.avoid_flush_during_shutdown: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.writable_file_max_buffer_size: 1048576 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.delayed_write_rate : 16777216 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_total_wal_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.delete_obsolete_files_period_micros: 
21600000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.stats_dump_period_sec: 600 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.stats_persist_period_sec: 600 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.stats_history_buffer_size: 1048576 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_open_files: -1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bytes_per_sync: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.wal_bytes_per_sync: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.strict_bytes_per_sync: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_readahead_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_background_flushes: -1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Compression algorithms supported: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kZSTD supported: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kXpressCompression supported: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kBZip2Compression supported: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kZSTDNotFinalCompression supported: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kLZ4Compression supported: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kZlibCompression supported: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kLZ4HCCompression supported: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: #011kSnappyCompression supported: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Fast CRC32 supported: Supported on x86 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: DMutex implementation: pthread_mutex_t Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005538515/store.db/MANIFEST-000005 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: 
[db/column_family.cc:630] --------------- Options for column family [default]: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.comparator: leveldb.BytewiseComparator Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.merge_operator: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_filter: None Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_filter_factory: None Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.sst_partitioner_factory: None Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.memtable_factory: SkipListFactory Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.table_factory: BlockBasedTable Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561ae070a980)#012 cache_index_and_filter_blocks: 1#012 cache_index_and_filter_blocks_with_high_priority: 0#012 pin_l0_filter_and_index_blocks_in_cache: 0#012 pin_top_level_index_and_filter: 1#012 index_type: 0#012 data_block_index_type: 0#012 index_shortening: 1#012 data_block_hash_table_util_ratio: 0.750000#012 checksum: 4#012 no_block_cache: 0#012 block_cache: 0x561ae0707350#012 block_cache_name: BinnedLRUCache#012 block_cache_options:#012 capacity : 536870912#012 num_shard_bits : 4#012 strict_capacity_limit : 0#012 high_pri_pool_ratio: 0.000#012 block_cache_compressed: (nil)#012 persistent_cache: (nil)#012 block_size: 4096#012 block_size_deviation: 10#012 block_restart_interval: 16#012 index_block_restart_interval: 1#012 metadata_block_size: 4096#012 partition_filters: 0#012 use_delta_encoding: 1#012 filter_policy: bloomfilter#012 whole_key_filtering: 1#012 verify_compression: 0#012 read_amp_bytes_per_bit: 0#012 format_version: 5#012 enable_index_compression: 1#012 block_align: 0#012 max_auto_readahead_size: 262144#012 prepopulate_block_cache: 0#012 initial_auto_readahead_size: 8192#012 num_file_reads_for_auto_readahead: 2 Nov 28 
04:56:26 localhost ceph-mon[301134]: rocksdb: Options.write_buffer_size: 33554432 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_write_buffer_number: 2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression: NoCompression Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression: Disabled Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.prefix_extractor: nullptr Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.num_levels: 7 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.min_write_buffer_number_to_merge: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_write_buffer_number_to_maintain: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_write_buffer_size_to_maintain: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.window_bits: -14 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.level: 32767 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.strategy: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.enabled: false Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.window_bits: -14 Nov 28 
04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.level: 32767 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.strategy: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.max_dict_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.use_zstd_dict_trainer: true Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.parallel_threads: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.enabled: false Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.level0_file_num_compaction_trigger: 4 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.level0_slowdown_writes_trigger: 20 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.level0_stop_writes_trigger: 36 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.target_file_size_base: 67108864 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.target_file_size_multiplier: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_base: 268435456 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 Nov 28 04:56:26 localhost 
ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_sequential_skip_in_iterations: 8 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_compaction_bytes: 1677721600 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.ignore_max_compaction_bytes_for_input: true Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.arena_block_size: 1048576 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.disable_auto_compactions: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_style: kCompactionStyleLevel Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_pri: kMinOverlappingRatio Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.size_ratio: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.min_merge_width: 2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 
1073741824 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.table_properties_collectors: Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.inplace_update_support: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.inplace_update_num_locks: 10000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.memtable_whole_key_filtering: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.memtable_huge_page_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.bloom_locality: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.max_successive_merges: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.optimize_filters_for_hits: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.paranoid_file_checks: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.force_consistency_checks: 1 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.report_bg_io_stats: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.ttl: 2592000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.periodic_compaction_seconds: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.preclude_last_level_data_seconds: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.preserve_internal_time_seconds: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enable_blob_files: false Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.min_blob_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_file_size: 268435456 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_compression_type: NoCompression Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.enable_blob_garbage_collection: false Nov 28 
04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_compaction_readahead_size: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.blob_file_starting_level: 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: Options.experimental_mempurge_threshold: 0.000000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005538515/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 75e61b0e-4f73-4b03-b096-8587ecbe7a9f Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323786679350, "job": 1, "event": "recovery_started", "wal_files": [4]} Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323786681587, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, 
"num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323786681701, "job": 1, "event": "recovery_finished"} Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:5047] Creating manifest 10 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x561ae072ee00 Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: DB pointer 0x561ae0824000 Nov 28 04:56:26 localhost ceph-mon[301134]: mon.np0005538515 does not exist in monmap, will attempt to join an existing cluster Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:56:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per 
commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 1/0 1.72 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Sum 1/0 1.72 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 
0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561ae0707350#2 capacity: 512.00 MB usage: 0.98 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.77 KB,0.000146031%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 28 04:56:26 localhost ceph-mon[301134]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] Nov 28 04:56:26 localhost ceph-mon[301134]: starting mon.np0005538515 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005538515 fsid 2c5417c9-00eb-57d5-a565-ddecbc7995c1 Nov 28 04:56:26 localhost ceph-mon[301134]: mon.np0005538515@-1(???) 
e0 preinit fsid 2c5417c9-00eb-57d5-a565-ddecbc7995c1 Nov 28 04:56:26 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing) e14 sync_obtain_latest_monmap Nov 28 04:56:26 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing) e14 sync_obtain_latest_monmap obtained monmap e14 Nov 28 04:56:26 localhost podman[301177]: Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.797849223 +0000 UTC m=+0.068827672 container create 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, architecture=x86_64, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux , GIT_BRANCH=main, release=553, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=) Nov 28 04:56:26 localhost systemd[1]: tmp-crun.03VsBs.mount: Deactivated successfully. Nov 28 04:56:26 localhost systemd[1]: Started libpod-conmon-7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8.scope. Nov 28 04:56:26 localhost systemd[1]: Started libcrun container. 
Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.764049651 +0000 UTC m=+0.035028130 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.870550452 +0000 UTC m=+0.141528901 container init 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, io.buildah.version=1.33.12, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, release=553, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7) Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.880453985 +0000 UTC m=+0.151432444 container start 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, version=7, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, RELEASE=main, build-date=2025-09-24T08:57:55, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, vendor=Red Hat, Inc.) Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.881016372 +0000 UTC m=+0.151994831 container attach 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, version=7, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux , distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, vendor=Red Hat, Inc., 
CEPH_POINT_RELEASE=) Nov 28 04:56:26 localhost quizzical_panini[301192]: 167 167 Nov 28 04:56:26 localhost systemd[1]: libpod-7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8.scope: Deactivated successfully. Nov 28 04:56:26 localhost podman[301177]: 2025-11-28 09:56:26.886630614 +0000 UTC m=+0.157609083 container died 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, vcs-type=git, release=553, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, build-date=2025-09-24T08:57:55) Nov 28 04:56:26 localhost podman[301197]: 2025-11-28 09:56:26.990732282 +0000 UTC m=+0.091199215 container remove 7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quizzical_panini, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=553, build-date=2025-09-24T08:57:55, vendor=Red Hat, 
Inc., version=7, com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, maintainer=Guillaume Abrioux , name=rhceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12) Nov 28 04:56:26 localhost systemd[1]: libpod-conmon-7294beee033c4cb7de008fb12a21fe87cf238a20fe3c578110b2ae5c579af9e8.scope: Deactivated successfully. Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).mds e17 new map Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).mds e17 print_map#012e17#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#01115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-28T08:07:30.958224+0000#012modified#0112025-11-28T09:49:53.259185+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#01183#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default 
file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=26449}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[6]#012metadata_pool#0117#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 26449 members: 26449#012[mds.mds.np0005538514.umgtoy{0:26449} state up:active seq 12 addr [v2:172.18.0.107:6808/1969410151,v1:172.18.0.107:6809/1969410151] compat {c=[1],r=[1],i=[17ff]}]#012 #012 #012Standby daemons:#012 #012[mds.mds.np0005538513.yljthc{-1:16968} state up:standby seq 1 addr [v2:172.18.0.106:6808/2782735008,v1:172.18.0.106:6809/2782735008] compat {c=[1],r=[1],i=[17ff]}]#012[mds.mds.np0005538515.anvatb{-1:26446} state up:standby seq 1 addr [v2:172.18.0.108:6808/2640180,v1:172.18.0.108:6809/2640180] compat {c=[1],r=[1],i=[17ff]}] Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).osd e90 crush map has features 3314933000852226048, adjusting msgr requires Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).osd e90 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).osd e90 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).osd e90 crush map has features 288514051259236352, adjusting msgr requires Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.0 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.3 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Removed label mon from host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mon.np0005538514 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538515 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.1 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: Removed label mgr from host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.4 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Removed label _admin from host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mon.np0005538515 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Removing np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 
localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Removing np0005538512.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:56:27 localhost ceph-mon[301134]: Removing np0005538512.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Removing daemon mgr.np0005538512.zyhkxs from np0005538512.localdomain -- ports [8765] Nov 28 04:56:27 localhost ceph-mon[301134]: Removing key for mgr.np0005538512.zyhkxs Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth rm", 
"entity": "mgr.np0005538512.zyhkxs"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005538512.zyhkxs"}]': finished Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538512.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538512 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538512 on np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Added label _no_schedule to host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.2 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.5 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain"}]': finished Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: Removed host np0005538512.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mon.np0005538513 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Saving service mon spec with placement label:mon Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538513 calling monitor election Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538514 calling monitor election Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538513 is new leader, mons np0005538513,np0005538514 in quorum (ranks 0,1) Nov 28 04:56:27 localhost ceph-mon[301134]: overall HEALTH_OK Nov 28 04:56:27 localhost ceph-mon[301134]: Remove daemons mon.np0005538515 Nov 28 04:56:27 localhost ceph-mon[301134]: Safe to remove mon.np0005538515: new quorum should be ['np0005538513', 'np0005538514'] (from 
['np0005538513', 'np0005538514']) Nov 28 04:56:27 localhost ceph-mon[301134]: Removing monitor np0005538515 from monmap... Nov 28 04:56:27 localhost ceph-mon[301134]: Removing daemon mon.np0005538515 from np0005538515.localdomain -- ports [] Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' 
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.2 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.5 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538514 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.0 (monmap changed)... Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.3 (monmap changed)... 
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.3 on np0005538514.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)...
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538514.djozup (monmap changed)...
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring crash.np0005538515 (monmap changed)...
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: Deploying daemon mon.np0005538515 on np0005538515.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.1 (monmap changed)...
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.1 on np0005538515.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring osd.4 (monmap changed)...
Nov 28 04:56:27 localhost ceph-mon[301134]: Reconfiguring daemon osd.4 on np0005538515.localdomain
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:27 localhost ceph-mon[301134]: mon.np0005538515@-1(synchronizing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 3
Nov 28 04:56:27 localhost ceph-mgr[286188]: ms_deliver_dispatch: unhandled message 0x5575ffdcf1e0 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Nov 28 04:56:27 localhost openstack_network_exporter[240973]: ERROR 09:56:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 04:56:27 localhost openstack_network_exporter[240973]: ERROR 09:56:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 04:56:27 localhost openstack_network_exporter[240973]: ERROR 09:56:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 04:56:27 localhost openstack_network_exporter[240973]: ERROR 09:56:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 04:56:27 localhost openstack_network_exporter[240973]:
Nov 28 04:56:27 localhost openstack_network_exporter[240973]: ERROR 09:56:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 04:56:27 localhost openstack_network_exporter[240973]:
Nov 28 04:56:27 localhost podman[301273]:
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.799844365 +0000 UTC m=+0.074192006 container create 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux , name=rhceph, GIT_BRANCH=main, version=7, RELEASE=main, CEPH_POINT_RELEASE=, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 04:56:27 localhost systemd[1]: var-lib-containers-storage-overlay-c1ee95d8c81fd8a8b2a249cffc03d35606a4a336f30e2b686796f40cfeb36a7e-merged.mount: Deactivated successfully.
Nov 28 04:56:27 localhost systemd[1]: Started libpod-conmon-3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365.scope.
Nov 28 04:56:27 localhost systemd[1]: Started libcrun container.
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.76989051 +0000 UTC m=+0.044238181 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.876199736 +0000 UTC m=+0.150547377 container init 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, version=7, name=rhceph, build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, distribution-scope=public, release=553, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.expose-services=)
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.886333025 +0000 UTC m=+0.160680666 container start 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.886585783 +0000 UTC m=+0.160933424 container attach 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, name=rhceph, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, architecture=x86_64, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux , ceph=True, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55)
Nov 28 04:56:27 localhost confident_feistel[301288]: 167 167
Nov 28 04:56:27 localhost systemd[1]: libpod-3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365.scope: Deactivated successfully.
Nov 28 04:56:27 localhost podman[301273]: 2025-11-28 09:56:27.888570654 +0000 UTC m=+0.162918315 container died 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, release=553, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Nov 28 04:56:27 localhost podman[301293]: 2025-11-28 09:56:27.981627725 +0000 UTC m=+0.085022677 container remove 3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=confident_feistel, GIT_BRANCH=main, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, vcs-type=git, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=553, io.buildah.version=1.33.12, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git)
Nov 28 04:56:27 localhost systemd[1]: libpod-conmon-3cd7c2118b50f354408439c87ae622519415375b69afe69cf94fa1f69712b365.scope: Deactivated successfully.
Nov 28 04:56:28 localhost systemd[1]: var-lib-containers-storage-overlay-a47b285070e973d1757d9cbe6d6f4eb6a282d7fe9e56692c0c44e8c104d648fa-merged.mount: Deactivated successfully.
Nov 28 04:56:28 localhost podman[239012]: time="2025-11-28T09:56:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 04:56:28 localhost podman[239012]: @ - - [28/Nov/2025:09:56:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1"
Nov 28 04:56:28 localhost podman[239012]: @ - - [28/Nov/2025:09:56:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19180 "" "Go-http-client/1.1"
Nov 28 04:56:29 localhost ceph-mon[301134]: mon.np0005538515@-1(probing) e15 my rank is now 2 (was -1)
Nov 28 04:56:29 localhost ceph-mon[301134]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election
Nov 28 04:56:29 localhost ceph-mon[301134]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1
Nov 28 04:56:29 localhost ceph-mon[301134]: mon.np0005538515@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Nov 28 04:56:30 localhost nova_compute[280168]: 2025-11-28 09:56:30.264 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:30 localhost nova_compute[280168]: 2025-11-28 09:56:30.264 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:32 localhost nova_compute[280168]: 2025-11-28 09:56:32.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:32 localhost nova_compute[280168]: 2025-11-28 09:56:32.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 04:56:32 localhost podman[301362]:
Nov 28 04:56:32 localhost podman[301362]: 2025-11-28 09:56:32.925560988 +0000 UTC m=+0.076806297 container create 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, GIT_BRANCH=main, io.buildah.version=1.33.12, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, vcs-type=git, GIT_CLEAN=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., release=553)
Nov 28 04:56:32 localhost systemd[1]: Started libpod-conmon-2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f.scope.
Nov 28 04:56:32 localhost systemd[1]: Started libcrun container.
Nov 28 04:56:32 localhost podman[301362]: 2025-11-28 09:56:32.992542073 +0000 UTC m=+0.143787342 container init 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, distribution-scope=public, release=553, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, build-date=2025-09-24T08:57:55)
Nov 28 04:56:32 localhost podman[301362]: 2025-11-28 09:56:32.893516729 +0000 UTC m=+0.044762028 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Nov 28 04:56:33 localhost podman[301362]: 2025-11-28 09:56:33.003044043 +0000 UTC m=+0.154289352 container start 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, vcs-type=git, build-date=2025-09-24T08:57:55, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, name=rhceph, distribution-scope=public, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Nov 28 04:56:33 localhost podman[301362]: 2025-11-28 09:56:33.003338243 +0000 UTC m=+0.154583522 container attach 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, architecture=x86_64, build-date=2025-09-24T08:57:55, name=rhceph, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True)
Nov 28 04:56:33 localhost relaxed_northcutt[301378]: 167 167
Nov 28 04:56:33 localhost systemd[1]: libpod-2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f.scope: Deactivated successfully.
Nov 28 04:56:33 localhost podman[301362]: 2025-11-28 09:56:33.005738885 +0000 UTC m=+0.156984164 container died 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, maintainer=Guillaume Abrioux , name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Nov 28 04:56:33 localhost podman[301383]: 2025-11-28 09:56:33.095043642 +0000 UTC m=+0.077657171 container remove 2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_northcutt, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, version=7, io.openshift.expose-services=, ceph=True, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=553, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.license_terms=https://www.redhat.com/agreements)
Nov 28 04:56:33 localhost systemd[1]: libpod-conmon-2c537f4269efc82ea4c3a807e32acc951613aa6563df4a9269cc33ad9240dd0f.scope: Deactivated successfully.
Nov 28 04:56:33 localhost nova_compute[280168]: 2025-11-28 09:56:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:33 localhost nova_compute[280168]: 2025-11-28 09:56:33.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 04:56:33 localhost nova_compute[280168]: 2025-11-28 09:56:33.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 04:56:33 localhost nova_compute[280168]: 2025-11-28 09:56:33.373 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 04:56:33 localhost systemd[1]: var-lib-containers-storage-overlay-ccae6c316a7e0b0fbd5cc83757b41b105132aff1424e7216c8dc088cbe0ee340-merged.mount: Deactivated successfully.
Nov 28 04:56:34 localhost podman[301507]: 2025-11-28 09:56:34.142649696 +0000 UTC m=+0.082933582 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-09-24T08:57:55, ceph=True, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64)
Nov 28 04:56:34 localhost ceph-mon[301134]: log_channel(cluster) log [INF] : mon.np0005538515 calling monitor election
Nov 28 04:56:34 localhost ceph-mon[301134]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(electing) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 28 04:56:34 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)...
Nov 28 04:56:34 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538513 calling monitor election
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538514 calling monitor election
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538513 is new leader, mons np0005538513,np0005538514 in quorum (ranks 0,1)
Nov 28 04:56:34 localhost ceph-mon[301134]: Health check failed: 1/3 mons down, quorum np0005538513,np0005538514 (MON_DOWN)
Nov 28 04:56:34 localhost ceph-mon[301134]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005538513,np0005538514
Nov 28 04:56:34 localhost ceph-mon[301134]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005538513,np0005538514
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515 (rank 2) addr [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] is down (out of quorum)
Nov 28 04:56:34 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:34 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:34 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538515.yfkzhl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Nov 28 04:56:34 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538515.yfkzhl (monmap changed)...
Nov 28 04:56:34 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538515.yfkzhl on np0005538515.localdomain
Nov 28 04:56:34 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:34 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:34 localhost podman[301507]: 2025-11-28 09:56:34.273838122 +0000 UTC m=+0.214122008 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, release=553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, distribution-scope=public, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553)
Nov 28 04:56:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 collect_metadata vda: no unique device id for vda: fallback method has no model nor serial
Nov 28 04:56:34 localhost ceph-mon[301134]: mgrc update_daemon_metadata mon.np0005538515 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005538515.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.6 (Plow),distro_version=9.6,hostname=np0005538515.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Nov 28 04:56:35 localhost ceph-mon[301134]: mon.np0005538515 calling monitor election
Nov 28 04:56:35 localhost ceph-mon[301134]: mon.np0005538513 calling monitor election
Nov 28 04:56:35 localhost ceph-mon[301134]: mon.np0005538515 calling monitor election
Nov 28 04:56:35 localhost ceph-mon[301134]: mon.np0005538513 is new leader, mons np0005538513,np0005538514,np0005538515 in quorum (ranks 0,1,2)
Nov 28 04:56:35 localhost ceph-mon[301134]: mon.np0005538514 calling monitor election
Nov 28 04:56:35 localhost ceph-mon[301134]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005538513,np0005538514)
Nov 28 04:56:35 localhost ceph-mon[301134]: Cluster is now healthy
Nov 28 04:56:35 localhost ceph-mon[301134]: overall HEALTH_OK
Nov 28 04:56:35 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:35 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:35 localhost nova_compute[280168]: 2025-11-28 09:56:35.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:36 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:56:37 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf
Nov 28 04:56:37 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf
Nov 28 04:56:37 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx'
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.445 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.445 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.446 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.446 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.446 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 04:56:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 04:56:37 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3870773326' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 04:56:37 localhost nova_compute[280168]: 2025-11-28 09:56:37.902 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.105 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.107 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12016MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", 
"product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.108 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.108 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.183 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.184 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.214 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:56:38 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:38 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:38 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:38 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538513.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:56:38 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3004239366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.670 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.676 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.700 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.703 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:56:38 localhost nova_compute[280168]: 2025-11-28 09:56:38.703 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:56:39 localhost ceph-mon[301134]: Reconfiguring crash.np0005538513 (monmap changed)... Nov 28 04:56:39 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538513 on np0005538513.localdomain Nov 28 04:56:39 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:39 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:39 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:56:39 localhost nova_compute[280168]: 2025-11-28 09:56:39.700 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:56:40 localhost nova_compute[280168]: 2025-11-28 09:56:40.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:56:40 localhost ceph-mon[301134]: Reconfiguring osd.2 (monmap changed)... 
Nov 28 04:56:40 localhost ceph-mon[301134]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:40 localhost systemd[1]: 
Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:56:40 localhost podman[302077]: 2025-11-28 09:56:40.981244405 +0000 UTC m=+0.077190767 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:56:40 localhost podman[302077]: 2025-11-28 09:56:40.995059887 +0000 UTC m=+0.091006279 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7) Nov 28 04:56:41 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated 
successfully. Nov 28 04:56:41 localhost ceph-mon[301134]: Reconfiguring osd.5 (monmap changed)... Nov 28 04:56:41 localhost ceph-mon[301134]: Reconfiguring daemon osd.5 on np0005538513.localdomain Nov 28 04:56:41 localhost ceph-mon[301134]: Reconfig service osd.default_drive_group Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:41 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e90 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 Nov 28 04:56:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e90 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 Nov 28 04:56:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 e91: 6 total, 6 up, 6 in Nov 28 04:56:41 localhost systemd[1]: session-67.scope: Deactivated successfully. Nov 28 04:56:41 localhost systemd[1]: session-67.scope: Consumed 23.524s CPU time. Nov 28 04:56:41 localhost systemd-logind[763]: Session 67 logged out. Waiting for processes to exit. Nov 28 04:56:41 localhost systemd-logind[763]: Removed session 67. 
Nov 28 04:56:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1019390646 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:56:41 localhost sshd[302096]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:56:41 localhost systemd-logind[763]: New session 71 of user ceph-admin. Nov 28 04:56:41 localhost systemd[1]: Started Session 71 of User ceph-admin. Nov 28 04:56:42 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... Nov 28 04:56:42 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:56:42 localhost ceph-mon[301134]: from='client.? 172.18.0.200:0/937537164' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: Activating manager daemon np0005538514.djozup Nov 28 04:56:42 localhost ceph-mon[301134]: from='client.? 172.18.0.200:0/937537164' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.26581 172.18.0.106:0/4290692976' entity='mgr.np0005538513.dsfdlx' Nov 28 04:56:42 localhost ceph-mon[301134]: Manager daemon np0005538514.djozup is now available Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: removing stray HostCache host record np0005538512.localdomain.devices.0 Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"}]': finished Nov 28 
04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005538512.localdomain.devices.0"}]': finished Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/mirror_snapshot_schedule"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/mirror_snapshot_schedule"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/trash_purge_schedule"} : dispatch Nov 28 04:56:42 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538514.djozup/trash_purge_schedule"} : dispatch Nov 28 04:56:42 localhost podman[302206]: 2025-11-28 09:56:42.945358221 +0000 UTC m=+0.078463267 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , architecture=x86_64, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=553, vcs-type=git, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:56:43 localhost podman[302206]: 2025-11-28 09:56:43.123746237 +0000 UTC m=+0.256851263 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, architecture=x86_64, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, vendor=Red Hat, Inc., release=553) Nov 28 04:56:44 localhost ceph-mon[301134]: [28/Nov/2025:09:56:43] ENGINE Bus STARTING Nov 28 04:56:44 localhost ceph-mon[301134]: [28/Nov/2025:09:56:43] ENGINE Serving on https://172.18.0.107:7150 Nov 28 04:56:44 localhost ceph-mon[301134]: [28/Nov/2025:09:56:43] ENGINE Client ('172.18.0.107', 59370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') Nov 28 04:56:44 localhost ceph-mon[301134]: [28/Nov/2025:09:56:43] ENGINE Serving on http://172.18.0.107:8765 Nov 28 04:56:44 localhost ceph-mon[301134]: [28/Nov/2025:09:56:43] ENGINE Bus STARTED Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:44 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:45 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:45 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:45 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: 
from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:56:46 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:56:46 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' 
entity='mgr.np0005538514.djozup' Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:56:46 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:56:46 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:56:46 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:46 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:46 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:56:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020036203 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:56:47 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 
28 04:56:47 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:47 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:56:47 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:48 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:49 localhost ceph-mon[301134]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON) Nov 28 04:56:49 localhost ceph-mon[301134]: Health check failed: 1 stray host(s) with 1 
daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST) Nov 28 04:56:49 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch Nov 28 04:56:49 localhost ceph-mon[301134]: Reconfiguring daemon osd.2 on np0005538513.localdomain Nov 28 04:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:56:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:56:49 localhost podman[303110]: 2025-11-28 09:56:49.993559388 +0000 UTC m=+0.092280118 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 04:56:50 localhost podman[303110]: 2025-11-28 09:56:50.003247804 +0000 UTC m=+0.101968554 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:56:50 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 04:56:50 localhost podman[303111]: 2025-11-28 09:56:50.059240453 +0000 UTC m=+0.157685445 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 04:56:50 localhost podman[303112]: 2025-11-28 09:56:50.103741672 +0000 UTC m=+0.198708818 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 04:56:50 localhost podman[303112]: 2025-11-28 09:56:50.113651944 +0000 UTC m=+0.208619050 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 04:56:50 localhost podman[303111]: 2025-11-28 09:56:50.125005721 +0000 UTC m=+0.223450713 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 04:56:50 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:56:50 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:56:50 localhost podman[303113]: 2025-11-28 09:56:50.199746673 +0000 UTC m=+0.291722077 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:56:50 localhost podman[303113]: 2025-11-28 09:56:50.213656448 +0000 UTC m=+0.305631902 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:56:50 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:50 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538513.yljthc (monmap changed)... 
Nov 28 04:56:50 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538513.yljthc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:50 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538513.yljthc on np0005538513.localdomain Nov 28 04:56:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:56:50.839 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:56:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:56:50.840 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:56:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:56:50.840 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:56:50 localhost systemd[1]: tmp-crun.skBMxA.mount: Deactivated successfully. 
Nov 28 04:56:51 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:51 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:51 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:51 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538513.dsfdlx (monmap changed)... Nov 28 04:56:51 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538513.dsfdlx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:51 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538513.dsfdlx on np0005538513.localdomain Nov 28 04:56:51 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054204 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:52 localhost ceph-mon[301134]: Reconfiguring crash.np0005538514 (monmap changed)... 
Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538514.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:52 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538514 on np0005538514.localdomain Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:52 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch Nov 28 04:56:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:56:52 localhost podman[303195]: 2025-11-28 09:56:52.983029369 +0000 UTC m=+0.085631876 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', 
'--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:56:52 localhost podman[303195]: 2025-11-28 09:56:52.994793578 +0000 UTC m=+0.097396075 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:56:53 localhost systemd[1]: 
56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:56:53 localhost ceph-mon[301134]: Reconfiguring osd.0 (monmap changed)... Nov 28 04:56:53 localhost ceph-mon[301134]: Reconfiguring daemon osd.0 on np0005538514.localdomain Nov 28 04:56:53 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0. Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.770448) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13 Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323814770632, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 12847, "num_deletes": 257, "total_data_size": 23230672, "memory_usage": 24400384, "flush_reason": "Manual Compaction"} Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: Reconfiguring osd.3 (monmap changed)... 
Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch Nov 28 04:56:54 localhost ceph-mon[301134]: Reconfiguring daemon osd.3 on np0005538514.localdomain Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323814906125, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18014386, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 12852, "table_properties": {"data_size": 17946891, "index_size": 36669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 317092, "raw_average_key_size": 26, "raw_value_size": 17744895, "raw_average_value_size": 1492, "num_data_blocks": 1394, "num_entries": 11890, "num_filter_entries": 11890, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 1764323786, "file_creation_time": 1764323814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": 
"75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}} Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 135715 microseconds, and 38000 cpu microseconds. Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.906185) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18014386 bytes OK Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.906211) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.907563) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.907586) EVENT_LOG_v1 {"time_micros": 1764323814907579, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0} Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.907605) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50 Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 23143546, prev total WAL file size 23174789, number of live WAL files 2. 
Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.911323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end) Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00 Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1762B)] Nov 28 04:56:54 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323814911421, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18016148, "oldest_snapshot_seqno": -1} Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 11639 keys, 18010811 bytes, temperature: kUnknown Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323815040427, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18010811, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17943987, "index_size": 36643, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29125, "raw_key_size": 312249, "raw_average_key_size": 26, "raw_value_size": 17745300, "raw_average_value_size": 1524, "num_data_blocks": 1394, 
"num_entries": 11639, "num_filter_entries": 11639, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764323814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}} Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:55.040943) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18010811 bytes Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:55.042777) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 139.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.2, 0.0 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11895, records dropped: 256 output_compression: NoCompression Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:55.042812) EVENT_LOG_v1 {"time_micros": 1764323815042797, "job": 4, "event": "compaction_finished", "compaction_time_micros": 129174, "compaction_time_cpu_micros": 50950, "output_level": 6, "num_output_files": 1, "total_output_size": 18010811, "num_input_records": 11895, "num_output_records": 11639, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323815045510, "job": 4, "event": "table_file_deletion", "file_number": 14} Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323815045569, "job": 4, 
"event": "table_file_deletion", "file_number": 8} Nov 28 04:56:55 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:56:54.911213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:56:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:55 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538514.umgtoy (monmap changed)... Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538514.umgtoy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:56:55 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538514.umgtoy on np0005538514.localdomain Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:55 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005538514.djozup", "caps": ["mon", 
"profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch Nov 28 04:56:55 localhost podman[303218]: 2025-11-28 09:56:55.974556993 +0000 UTC m=+0.079350604 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 04:56:55 localhost podman[303218]: 2025-11-28 09:56:55.983835997 +0000 UTC 
m=+0.088629588 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 04:56:55 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:56:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054722 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:56:56 localhost ceph-mon[301134]: Saving service mon spec with placement label:mon Nov 28 04:56:56 localhost ceph-mon[301134]: Reconfiguring mgr.np0005538514.djozup (monmap changed)... Nov 28 04:56:56 localhost ceph-mon[301134]: Reconfiguring daemon mgr.np0005538514.djozup on np0005538514.localdomain Nov 28 04:56:56 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:56 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:56 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:56:57 localhost openstack_network_exporter[240973]: ERROR 09:56:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:56:57 localhost openstack_network_exporter[240973]: ERROR 09:56:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:56:57 localhost openstack_network_exporter[240973]: ERROR 09:56:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:56:57 localhost openstack_network_exporter[240973]: ERROR 09:56:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:56:57 localhost openstack_network_exporter[240973]: Nov 28 04:56:57 localhost openstack_network_exporter[240973]: ERROR 09:56:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:56:57 localhost openstack_network_exporter[240973]: Nov 28 04:56:58 localhost podman[303290]: Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.180638106 +0000 UTC m=+0.082976385 container create 
469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.33.12, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0) Nov 28 04:56:58 localhost systemd[1]: Started libpod-conmon-469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f.scope. Nov 28 04:56:58 localhost systemd[1]: tmp-crun.9OVs9M.mount: Deactivated successfully. Nov 28 04:56:58 localhost systemd[1]: Started libcrun container. 
Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.141395448 +0000 UTC m=+0.043733707 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.25052517 +0000 UTC m=+0.152863399 container init 469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, version=7, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, RELEASE=main) Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.260960469 +0000 UTC m=+0.163298758 container start 469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, version=7, io.openshift.tags=rhceph ceph, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-type=git, 
build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, architecture=x86_64) Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.261266968 +0000 UTC m=+0.163605207 container attach 469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, io.buildah.version=1.33.12, maintainer=Guillaume Abrioux , vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, RELEASE=main, name=rhceph, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 
28 04:56:58 localhost systemd[1]: libpod-469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f.scope: Deactivated successfully. Nov 28 04:56:58 localhost serene_ritchie[303305]: 167 167 Nov 28 04:56:58 localhost podman[303290]: 2025-11-28 09:56:58.26625399 +0000 UTC m=+0.168592229 container died 469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, CEPH_POINT_RELEASE=, release=553, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vendor=Red Hat, Inc., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_BRANCH=main, maintainer=Guillaume Abrioux , description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, ceph=True, io.buildah.version=1.33.12, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64) Nov 28 04:56:58 localhost podman[303310]: 2025-11-28 09:56:58.367888483 +0000 UTC m=+0.087134401 container remove 469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_ritchie, GIT_BRANCH=main, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, 
io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux , url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph) Nov 28 04:56:58 localhost systemd[1]: libpod-conmon-469c94809154f654ce97900746bdb27238d67a1707e2a59d99c6dbfd972af64f.scope: Deactivated successfully. Nov 28 04:56:58 localhost ceph-mon[301134]: Reconfiguring mon.np0005538514 (monmap changed)... Nov 28 04:56:58 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538514 on np0005538514.localdomain Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:58 localhost ceph-mon[301134]: Reconfiguring crash.np0005538515 (monmap changed)... 
Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005538515.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch Nov 28 04:56:58 localhost ceph-mon[301134]: Reconfiguring daemon crash.np0005538515 on np0005538515.localdomain Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:58 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch Nov 28 04:56:58 localhost podman[239012]: time="2025-11-28T09:56:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:56:58 localhost podman[239012]: @ - - [28/Nov/2025:09:56:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:56:58 localhost podman[239012]: @ - - [28/Nov/2025:09:56:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19187 "" "Go-http-client/1.1" Nov 28 04:56:59 localhost podman[303379]: Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.145106912 +0000 UTC m=+0.074068292 container create 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, maintainer=Guillaume Abrioux , name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, release=553, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., io.buildah.version=1.33.12, com.redhat.component=rhceph-container) Nov 28 04:56:59 localhost systemd[1]: Started libpod-conmon-318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa.scope. Nov 28 04:56:59 localhost systemd[1]: var-lib-containers-storage-overlay-3a7566c280c0a72a6b2e5d072bb358d2cae274ead210aed0a72f210cb175d1ef-merged.mount: Deactivated successfully. Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.11392416 +0000 UTC m=+0.042885540 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:56:59 localhost systemd[1]: Started libcrun container. 
Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.231323264 +0000 UTC m=+0.160284634 container init 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, ceph=True, io.openshift.expose-services=) Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.24230493 +0000 UTC m=+0.171266300 container start 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, ceph=True, vendor=Red Hat, Inc., name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, RELEASE=main, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, 
vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, architecture=x86_64, release=553, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7) Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.242753993 +0000 UTC m=+0.171715363 container attach 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.33.12, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=553, build-date=2025-09-24T08:57:55, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:56:59 localhost silly_elion[303394]: 167 167 Nov 28 04:56:59 localhost systemd[1]: 
libpod-318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa.scope: Deactivated successfully. Nov 28 04:56:59 localhost podman[303379]: 2025-11-28 09:56:59.24627421 +0000 UTC m=+0.175235610 container died 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.buildah.version=1.33.12, vendor=Red Hat, Inc., version=7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.license_terms=https://www.redhat.com/agreements, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, release=553, RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, ceph=True, GIT_CLEAN=True, maintainer=Guillaume Abrioux , build-date=2025-09-24T08:57:55, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container) Nov 28 04:56:59 localhost podman[303399]: 2025-11-28 09:56:59.324015474 +0000 UTC m=+0.070251636 container remove 318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_elion, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, 
org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.expose-services=, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_CLEAN=True, io.buildah.version=1.33.12, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/agreements, distribution-scope=public, release=553, io.openshift.tags=rhceph ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:56:59 localhost systemd[1]: libpod-conmon-318493ff4cf94b48366183f04405bf046344f3366211f2290ea49e9953e8f9aa.scope: Deactivated successfully. Nov 28 04:56:59 localhost ceph-mon[301134]: Reconfiguring osd.1 (monmap changed)... Nov 28 04:56:59 localhost ceph-mon[301134]: Reconfiguring daemon osd.1 on np0005538515.localdomain Nov 28 04:56:59 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:59 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:56:59 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:00 localhost podman[303476]: Nov 28 04:57:00 localhost systemd[1]: var-lib-containers-storage-overlay-360fbb81d2a782204a10bfef9b27021e137add83d6bf94599a0f17d16f2375ef-merged.mount: Deactivated successfully. 
Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.199764472 +0000 UTC m=+0.074567518 container create b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, maintainer=Guillaume Abrioux , ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:57:00 localhost systemd[1]: Started libpod-conmon-b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383.scope. Nov 28 04:57:00 localhost systemd[1]: Started libcrun container. 
Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.26556436 +0000 UTC m=+0.140367406 container init b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, io.openshift.tags=rhceph ceph, release=553, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux , version=7, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.33.12, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vcs-type=git, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.169418535 +0000 UTC m=+0.044221631 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.274631037 +0000 UTC m=+0.149434093 container start b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, build-date=2025-09-24T08:57:55, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux , name=rhceph, com.redhat.component=rhceph-container, release=553, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, 
ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_CLEAN=True, version=7, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git) Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.275905516 +0000 UTC m=+0.150708532 container attach b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, release=553, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, maintainer=Guillaume Abrioux , RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/agreements, name=rhceph, ceph=True, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.33.12) Nov 28 04:57:00 
localhost wizardly_curie[303492]: 167 167 Nov 28 04:57:00 localhost systemd[1]: libpod-b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383.scope: Deactivated successfully. Nov 28 04:57:00 localhost podman[303476]: 2025-11-28 09:57:00.279385462 +0000 UTC m=+0.154188538 container died b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux , distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=553, CEPH_POINT_RELEASE=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55) Nov 28 04:57:00 localhost podman[303497]: 2025-11-28 09:57:00.365594224 +0000 UTC m=+0.078544179 container remove b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, 
com.redhat.license_terms=https://www.redhat.com/agreements, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=553, version=7, description=Red Hat Ceph Storage 7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.33.12, distribution-scope=public, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:57:00 localhost systemd[1]: libpod-conmon-b9786a901ee1086bf612daca4622bcf62e049248dff8d43507d790d481eb6383.scope: Deactivated successfully. Nov 28 04:57:00 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:00 localhost ceph-mon[301134]: Reconfiguring osd.4 (monmap changed)... Nov 28 04:57:00 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch Nov 28 04:57:00 localhost ceph-mon[301134]: Reconfiguring daemon osd.4 on np0005538515.localdomain Nov 28 04:57:00 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost systemd[1]: var-lib-containers-storage-overlay-0687bdb07c7ae93399a19b7e5f84f8193c66beccbcc7234c7ab80fcf4a24fcb4-merged.mount: Deactivated successfully. 
Nov 28 04:57:01 localhost podman[303572]: Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.237797483 +0000 UTC m=+0.079182548 container create cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, ceph=True, io.buildah.version=1.33.12, version=7, release=553, maintainer=Guillaume Abrioux , distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:57:01 localhost systemd[1]: Started libpod-conmon-cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b.scope. Nov 28 04:57:01 localhost systemd[1]: Started libcrun container. 
Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.203899919 +0000 UTC m=+0.045285014 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.311220735 +0000 UTC m=+0.152605790 container init cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_CLEAN=True, io.buildah.version=1.33.12, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, io.openshift.expose-services=, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d) Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.319459266 +0000 UTC m=+0.160844321 container start cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/agreements, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, vendor=Red 
Hat, Inc., ceph=True, io.buildah.version=1.33.12, release=553, build-date=2025-09-24T08:57:55, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux , io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git) Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.319736506 +0000 UTC m=+0.161121561 container attach cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, distribution-scope=public, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.buildah.version=1.33.12, RELEASE=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux , architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7) Nov 28 04:57:01 localhost 
magical_gould[303587]: 167 167 Nov 28 04:57:01 localhost systemd[1]: libpod-cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b.scope: Deactivated successfully. Nov 28 04:57:01 localhost podman[303572]: 2025-11-28 09:57:01.322653855 +0000 UTC m=+0.164038930 container died cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, maintainer=Guillaume Abrioux , release=553, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/agreements, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, build-date=2025-09-24T08:57:55, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3) Nov 28 04:57:01 localhost podman[303592]: 2025-11-28 09:57:01.413455637 +0000 UTC m=+0.082163710 container remove cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gould, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.buildah.version=1.33.12, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-09-24T08:57:55, release=553, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/agreements, version=7) Nov 28 04:57:01 localhost systemd[1]: libpod-conmon-cf7c048209a7a2890d9109dbdeb2dd4f41966258875814b3b86f78caed3f384b.scope: Deactivated successfully. Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:57:01 localhost ceph-mon[301134]: Reconfiguring mds.mds.np0005538515.anvatb (monmap changed)... 
Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005538515.anvatb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch Nov 28 04:57:01 localhost ceph-mon[301134]: Reconfiguring daemon mds.mds.np0005538515.anvatb on np0005538515.localdomain Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:01 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:57:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:02 localhost podman[303662]: Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.141427742 +0000 UTC m=+0.074508096 container create 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, name=rhceph, maintainer=Guillaume Abrioux , org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_BRANCH=main, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_CLEAN=True, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., version=7, io.buildah.version=1.33.12, com.redhat.license_terms=https://www.redhat.com/agreements, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and 
supported base image., release=553, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhceph ceph) Nov 28 04:57:02 localhost systemd[1]: Started libpod-conmon-5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc.scope. Nov 28 04:57:02 localhost systemd[1]: Started libcrun container. Nov 28 04:57:02 localhost systemd[1]: var-lib-containers-storage-overlay-95062f8b9d46047cfa7542ca93ec336b93848a418b3f12dfdef1ca065f86b3bf-merged.mount: Deactivated successfully. Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.201742783 +0000 UTC m=+0.134823137 container init 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, distribution-scope=public, io.buildah.version=1.33.12, GIT_BRANCH=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, architecture=x86_64, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, name=rhceph) Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.111462177 +0000 UTC 
m=+0.044542601 image pull registry.redhat.io/rhceph/rhceph-7-rhel9:latest Nov 28 04:57:02 localhost systemd[1]: tmp-crun.ieDyM4.mount: Deactivated successfully. Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.214933686 +0000 UTC m=+0.148014040 container start 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, io.openshift.expose-services=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, io.openshift.tags=rhceph ceph, release=553, GIT_BRANCH=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.33.12, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, vendor=Red Hat, Inc., build-date=2025-09-24T08:57:55, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux , distribution-scope=public, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553) Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.215466742 +0000 UTC m=+0.148547136 container attach 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, 
ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.33.12, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, vcs-type=git, maintainer=Guillaume Abrioux , distribution-scope=public, name=rhceph, release=553, com.redhat.license_terms=https://www.redhat.com/agreements, RELEASE=main, GIT_BRANCH=main, build-date=2025-09-24T08:57:55, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9) Nov 28 04:57:02 localhost lucid_banach[303677]: 167 167 Nov 28 04:57:02 localhost systemd[1]: libpod-5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc.scope: Deactivated successfully. Nov 28 04:57:02 localhost podman[303662]: 2025-11-28 09:57:02.218818705 +0000 UTC m=+0.151899079 container died 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2025-09-24T08:57:55, description=Red Hat Ceph Storage 7, io.buildah.version=1.33.12, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux , summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=553, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, 
ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, architecture=x86_64) Nov 28 04:57:02 localhost systemd[1]: tmp-crun.nPY5ev.mount: Deactivated successfully. Nov 28 04:57:02 localhost podman[303682]: 2025-11-28 09:57:02.318394185 +0000 UTC m=+0.091397722 container remove 5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=lucid_banach, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux , GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, version=7, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.33.12, com.redhat.component=rhceph-container, build-date=2025-09-24T08:57:55, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/agreements) Nov 28 04:57:02 localhost systemd[1]: libpod-conmon-5e3878dd50e525dff491b0b4fbeb79c7c401669727527736dc5289bf5a881ebc.scope: Deactivated successfully. Nov 28 04:57:02 localhost ceph-mon[301134]: Reconfiguring mon.np0005538515 (monmap changed)... 
Nov 28 04:57:02 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538515 on np0005538515.localdomain Nov 28 04:57:03 localhost systemd[1]: var-lib-containers-storage-overlay-13f3cbae1de9f99ee9b8d0449455f9a474d8c3eedd97ff6a9b979ff9e18faeca-merged.mount: Deactivated successfully. Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:03 localhost ceph-mon[301134]: Reconfiguring mon.np0005538513 (monmap changed)... Nov 28 04:57:03 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "mon."} : dispatch Nov 28 04:57:03 localhost ceph-mon[301134]: Reconfiguring daemon mon.np0005538513 on np0005538513.localdomain Nov 28 04:57:05 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:05 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:05 localhost systemd[1]: session-69.scope: Deactivated successfully. Nov 28 04:57:05 localhost systemd[1]: session-69.scope: Consumed 1.713s CPU time. Nov 28 04:57:05 localhost systemd-logind[763]: Session 69 logged out. Waiting for processes to exit. Nov 28 04:57:05 localhost systemd-logind[763]: Removed session 69. 
Nov 28 04:57:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:07 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup' Nov 28 04:57:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:57:11 localhost systemd[1]: tmp-crun.yr4esi.mount: Deactivated successfully. Nov 28 04:57:11 localhost podman[303717]: 2025-11-28 09:57:11.981036584 +0000 UTC m=+0.082198760 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible) Nov 28 04:57:11 localhost podman[303717]: 2025-11-28 09:57:11.996574919 +0000 UTC m=+0.097737135 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=) Nov 28 04:57:12 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:57:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 04:57:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3967776959' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 04:57:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 04:57:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3967776959' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 04:57:15 localhost systemd[1]: Stopping User Manager for UID 1003... 
Nov 28 04:57:15 localhost systemd[299423]: Activating special unit Exit the Session... Nov 28 04:57:15 localhost systemd[299423]: Stopped target Main User Target. Nov 28 04:57:15 localhost systemd[299423]: Stopped target Basic System. Nov 28 04:57:15 localhost systemd[299423]: Stopped target Paths. Nov 28 04:57:15 localhost systemd[299423]: Stopped target Sockets. Nov 28 04:57:15 localhost systemd[299423]: Stopped target Timers. Nov 28 04:57:15 localhost systemd[299423]: Stopped Mark boot as successful after the user session has run 2 minutes. Nov 28 04:57:15 localhost systemd[299423]: Stopped Daily Cleanup of User's Temporary Directories. Nov 28 04:57:15 localhost systemd[299423]: Closed D-Bus User Message Bus Socket. Nov 28 04:57:15 localhost systemd[299423]: Stopped Create User's Volatile Files and Directories. Nov 28 04:57:15 localhost systemd[299423]: Removed slice User Application Slice. Nov 28 04:57:15 localhost systemd[299423]: Reached target Shutdown. Nov 28 04:57:15 localhost systemd[299423]: Finished Exit the Session. Nov 28 04:57:15 localhost systemd[299423]: Reached target Exit the Session. Nov 28 04:57:15 localhost systemd[1]: user@1003.service: Deactivated successfully. Nov 28 04:57:15 localhost systemd[1]: Stopped User Manager for UID 1003. Nov 28 04:57:15 localhost systemd[1]: Stopping User Runtime Directory /run/user/1003... Nov 28 04:57:15 localhost systemd[1]: run-user-1003.mount: Deactivated successfully. Nov 28 04:57:15 localhost systemd[1]: user-runtime-dir@1003.service: Deactivated successfully. Nov 28 04:57:15 localhost systemd[1]: Stopped User Runtime Directory /run/user/1003. Nov 28 04:57:15 localhost systemd[1]: Removed slice User Slice of UID 1003. Nov 28 04:57:15 localhost systemd[1]: user-1003.slice: Consumed 2.299s CPU time. 
Nov 28 04:57:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:57:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:57:20 localhost systemd[297255]: Starting Mark boot as successful... Nov 28 04:57:20 localhost systemd[1]: tmp-crun.Ppi2e9.mount: Deactivated successfully. Nov 28 04:57:21 localhost systemd[297255]: Finished Mark boot as successful. 
Nov 28 04:57:21 localhost podman[303738]: 2025-11-28 09:57:21.006280934 +0000 UTC m=+0.108493274 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm) Nov 28 04:57:21 localhost podman[303738]: 2025-11-28 09:57:21.042892851 +0000 UTC m=+0.145105191 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:57:21 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:57:21 localhost podman[303739]: 2025-11-28 09:57:21.04875251 +0000 UTC m=+0.150636370 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 04:57:21 localhost podman[303741]: 2025-11-28 09:57:21.154271692 +0000 UTC m=+0.246434865 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 04:57:21 localhost podman[303741]: 2025-11-28 09:57:21.165491085 +0000 UTC m=+0.257654288 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:57:21 localhost podman[303740]: 2025-11-28 09:57:21.124908965 +0000 UTC m=+0.219349267 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 04:57:21 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:57:21 localhost podman[303740]: 2025-11-28 09:57:21.207333862 +0000 UTC m=+0.301774184 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 04:57:21 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:57:21 localhost podman[303739]: 2025-11-28 09:57:21.228959262 +0000 UTC m=+0.330843172 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 04:57:21 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:57:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:57:23 localhost podman[303821]: 2025-11-28 09:57:23.953737073 +0000 UTC m=+0.060334483 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 04:57:23 localhost podman[303821]: 2025-11-28 09:57:23.958431306 +0000 UTC m=+0.065028686 container exec_died 
56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:57:23 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 04:57:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:57:26 localhost podman[303846]: 2025-11-28 09:57:26.972910211 +0000 UTC m=+0.081073626 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:57:26 localhost podman[303846]: 2025-11-28 09:57:26.986376473 +0000 UTC m=+0.094539888 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 04:57:27 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:57:27 localhost openstack_network_exporter[240973]: ERROR 09:57:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:57:27 localhost openstack_network_exporter[240973]: ERROR 09:57:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:57:27 localhost openstack_network_exporter[240973]: ERROR 09:57:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:57:27 localhost openstack_network_exporter[240973]: ERROR 09:57:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:57:27 localhost openstack_network_exporter[240973]: Nov 28 04:57:27 localhost openstack_network_exporter[240973]: ERROR 09:57:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:57:27 localhost openstack_network_exporter[240973]: Nov 28 04:57:28 localhost podman[239012]: time="2025-11-28T09:57:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:57:28 localhost podman[239012]: @ - - [28/Nov/2025:09:57:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:57:28 localhost podman[239012]: @ - - [28/Nov/2025:09:57:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19185 "" "Go-http-client/1.1" Nov 28 04:57:30 localhost nova_compute[280168]: 2025-11-28 09:57:30.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 
inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:32 localhost nova_compute[280168]: 2025-11-28 09:57:32.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:33 localhost nova_compute[280168]: 2025-11-28 09:57:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:33 localhost nova_compute[280168]: 2025-11-28 09:57:33.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:57:34 localhost nova_compute[280168]: 2025-11-28 09:57:34.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:35 localhost nova_compute[280168]: 2025-11-28 09:57:35.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:35 localhost nova_compute[280168]: 2025-11-28 09:57:35.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:57:35 
localhost nova_compute[280168]: 2025-11-28 09:57:35.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:57:35 localhost nova_compute[280168]: 2025-11-28 09:57:35.357 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:57:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:37 localhost nova_compute[280168]: 2025-11-28 09:57:37.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:37 localhost nova_compute[280168]: 2025-11-28 09:57:37.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.262 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.263 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.263 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.263 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.263 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:57:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:57:39 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1903764416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.701 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.898 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.899 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12044MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.899 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.899 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.983 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:57:39 localhost nova_compute[280168]: 2025-11-28 09:57:39.984 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.020 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:57:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:57:40 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/279813176' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.469 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.475 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.492 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.495 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:57:40 localhost nova_compute[280168]: 2025-11-28 09:57:40.495 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:57:41 localhost sshd[303910]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:57:41 localhost nova_compute[280168]: 2025-11-28 09:57:41.492 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:41 localhost nova_compute[280168]: 2025-11-28 09:57:41.492 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:57:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:42 localhost sshd[303911]: main: sshd: ssh-rsa algorithm is disabled Nov 28 04:57:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:57:42 localhost podman[303912]: 2025-11-28 09:57:42.975390102 +0000 UTC m=+0.079748905 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, release=1755695350, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, 
url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm) Nov 28 04:57:42 localhost podman[303912]: 2025-11-28 09:57:42.98776404 +0000 UTC m=+0.092122813 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, config_id=edpm, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal) Nov 28 04:57:43 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:57:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:57:50.841 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:57:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:57:50.841 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:57:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:57:50.842 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:57:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:57:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:57:51 localhost podman[303932]: 2025-11-28 09:57:51.970941222 +0000 UTC m=+0.079778315 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:57:51 localhost podman[303932]: 2025-11-28 09:57:51.982312716 +0000 UTC m=+0.091149819 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0) Nov 28 04:57:51 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:57:52 localhost podman[303937]: 2025-11-28 09:57:51.985817913 +0000 UTC m=+0.084157449 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:57:52 localhost podman[303933]: 2025-11-28 09:57:52.045284232 +0000 UTC 
m=+0.145357969 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 04:57:52 localhost podman[303937]: 2025-11-28 09:57:52.064778912 +0000 UTC m=+0.163118468 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:57:52 localhost podman[303940]: 2025-11-28 09:57:52.023294877 +0000 UTC m=+0.120929461 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:57:52 localhost podman[303940]: 2025-11-28 09:57:52.108444223 +0000 UTC m=+0.206078857 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:57:52 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:57:52 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:57:52 localhost podman[303933]: 2025-11-28 09:57:52.12817234 +0000 UTC m=+0.228246127 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 04:57:52 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:57:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:57:54 localhost podman[304015]: 2025-11-28 09:57:54.976487261 +0000 UTC m=+0.083581890 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:57:55 localhost podman[304015]: 2025-11-28 09:57:55.01245492 +0000 UTC m=+0.119549559 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:57:55 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:57:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:57:57 localhost openstack_network_exporter[240973]: ERROR 09:57:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:57:57 localhost openstack_network_exporter[240973]: ERROR 09:57:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:57:57 localhost openstack_network_exporter[240973]: ERROR 09:57:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:57:57 localhost openstack_network_exporter[240973]: ERROR 09:57:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:57:57 localhost openstack_network_exporter[240973]: Nov 28 04:57:57 localhost openstack_network_exporter[240973]: ERROR 09:57:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:57:57 localhost openstack_network_exporter[240973]: Nov 28 04:57:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:57:57 localhost podman[304039]: 2025-11-28 09:57:57.981280158 +0000 UTC m=+0.089885512 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 28 04:57:57 localhost podman[304039]: 2025-11-28 09:57:57.990763434 +0000 UTC m=+0.099368788 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 28 04:57:58 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 04:57:58 localhost podman[239012]: time="2025-11-28T09:57:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 04:57:58 localhost podman[239012]: @ - - [28/Nov/2025:09:57:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1"
Nov 28 04:57:58 localhost podman[239012]: @ - - [28/Nov/2025:09:57:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19188 "" "Go-http-client/1.1"
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 09:58:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 04:58:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 e92: 6 total, 6 up, 6 in
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr handle_mgr_map Activating!
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr handle_mgr_map I am now activating
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538513"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538513"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538514"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538514"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon metadata", "id": "np0005538515"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata", "id": "np0005538515"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005538513.yljthc"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mds metadata", "who": "mds.np0005538513.yljthc"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).mds e17 all = 0
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005538515.anvatb"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mds metadata", "who": "mds.np0005538515.anvatb"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).mds e17 all = 0
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005538514.umgtoy"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mds metadata", "who": "mds.np0005538514.umgtoy"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).mds e17 all = 0
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005538515.yfkzhl", "id": "np0005538515.yfkzhl"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr metadata", "who": "np0005538515.yfkzhl", "id": "np0005538515.yfkzhl"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005538513.dsfdlx", "id": "np0005538513.dsfdlx"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr metadata", "who": "np0005538513.dsfdlx", "id": "np0005538513.dsfdlx"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mds metadata"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).mds e17 all = 1
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd metadata"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mon metadata"} : dispatch
Nov 28 04:58:05 localhost ceph-mgr[286188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: balancer
Nov 28 04:58:05 localhost ceph-mgr[286188]: [balancer INFO root] Starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_09:58:05
Nov 28 04:58:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 28 04:58:05 localhost ceph-mgr[286188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 28 04:58:05 localhost systemd[1]: session-71.scope: Deactivated successfully.
Nov 28 04:58:05 localhost systemd[1]: session-71.scope: Consumed 10.660s CPU time.
Nov 28 04:58:05 localhost systemd-logind[763]: Session 71 logged out. Waiting for processes to exit.
Nov 28 04:58:05 localhost systemd-logind[763]: Removed session 71.
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: cephadm
Nov 28 04:58:05 localhost ceph-mgr[286188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: crash
Nov 28 04:58:05 localhost ceph-mgr[286188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: devicehealth
Nov 28 04:58:05 localhost ceph-mgr[286188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: iostat
Nov 28 04:58:05 localhost ceph-mgr[286188]: [devicehealth INFO root] Starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: nfs
Nov 28 04:58:05 localhost ceph-mgr[286188]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: orchestrator
Nov 28 04:58:05 localhost ceph-mgr[286188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: pg_autoscaler
Nov 28 04:58:05 localhost ceph-mgr[286188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: progress
Nov 28 04:58:05 localhost ceph-mgr[286188]: [progress INFO root] Loading...
Nov 28 04:58:05 localhost ceph-mgr[286188]: [progress INFO root] Loaded [, , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ] historic events
Nov 28 04:58:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust
Nov 28 04:58:05 localhost ceph-mgr[286188]: [progress INFO root] Loaded OSDMap, ready.
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] recovery thread starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] starting setup
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: rbd_support
Nov 28 04:58:05 localhost ceph-mgr[286188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: restful
Nov 28 04:58:05 localhost ceph-mgr[286188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: status
Nov 28 04:58:05 localhost ceph-mgr[286188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: telemetry
Nov 28 04:58:05 localhost ceph-mgr[286188]: [restful INFO root] server_addr: :: server_port: 8003
Nov 28 04:58:05 localhost ceph-mgr[286188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} : dispatch
Nov 28 04:58:05 localhost ceph-mgr[286188]: [restful WARNING root] server not running: no certificate configured
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 04:58:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 04:58:05 localhost ceph-mgr[286188]: mgr load Constructed class from module: volumes
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.689+0000 7fcc8cc53640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.689+0000 7fcc8cc53640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.689+0000 7fcc8cc53640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.689+0000 7fcc8cc53640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.689+0000 7fcc8cc53640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 04:58:05 localhost ceph-mon[301134]: from='mgr.34348 172.18.0.107:0/1408760265' entity='mgr.np0005538514.djozup' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: from='mgr.34348 ' entity='mgr.np0005538514.djozup'
Nov 28 04:58:05 localhost ceph-mon[301134]: from='client.? 172.18.0.200:0/1630948378' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: Activating manager daemon np0005538515.yfkzhl
Nov 28 04:58:05 localhost ceph-mon[301134]: from='client.? 172.18.0.200:0/1630948378' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Nov 28 04:58:05 localhost ceph-mon[301134]: Manager daemon np0005538515.yfkzhl is now available
Nov 28 04:58:05 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} : dispatch
Nov 28 04:58:05 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/mirror_snapshot_schedule"} : dispatch
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.693+0000 7fcc89c4d640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.693+0000 7fcc89c4d640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.693+0000 7fcc89c4d640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.693+0000 7fcc89c4d640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T09:58:05.693+0000 7fcc89c4d640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] PerfHandler: starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TaskHandler: starting
Nov 28 04:58:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} v 0)
Nov 28 04:58:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} : dispatch
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 28 04:58:05 localhost ceph-mgr[286188]: [rbd_support INFO root] setup complete
Nov 28 04:58:05 localhost sshd[304282]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 04:58:05 localhost systemd-logind[763]: New session 72 of user ceph-admin.
Nov 28 04:58:05 localhost systemd[1]: Started Session 72 of User ceph-admin.
Nov 28 04:58:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:58:06 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} : dispatch
Nov 28 04:58:06 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005538515.yfkzhl/trash_purge_schedule"} : dispatch
Nov 28 04:58:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 04:58:06 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:58:06] ENGINE Bus STARTING
Nov 28 04:58:06 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:58:06] ENGINE Bus STARTING
Nov 28 04:58:06 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:58:06] ENGINE Serving on http://172.18.0.108:8765
Nov 28 04:58:06 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:58:06] ENGINE Serving on http://172.18.0.108:8765
Nov 28 04:58:06 localhost systemd[1]: tmp-crun.VgD5Vk.mount: Deactivated successfully.
Nov 28 04:58:06 localhost podman[304402]: 2025-11-28 09:58:06.980779455 +0000 UTC m=+0.094733667 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, build-date=2025-09-24T08:57:55, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, release=553, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux , CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.12, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/agreements, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Nov 28 04:58:07 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:58:07] ENGINE Serving on https://172.18.0.108:7150
Nov 28 04:58:07 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:58:07] ENGINE Serving on https://172.18.0.108:7150
Nov 28 04:58:07 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:58:07] ENGINE Bus STARTED
Nov 28 04:58:07 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:58:07] ENGINE Bus STARTED
Nov 28 04:58:07 localhost ceph-mgr[286188]: [cephadm INFO cherrypy.error] [28/Nov/2025:09:58:07] ENGINE Client ('172.18.0.108', 56132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 28 04:58:07 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : [28/Nov/2025:09:58:07] ENGINE Client ('172.18.0.108', 56132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 28 04:58:07 localhost podman[304402]: 2025-11-28 09:58:07.074722519 +0000 UTC m=+0.188676701 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-09-24T08:57:55, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=553, name=rhceph, io.buildah.version=1.33.12, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/agreements, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux , io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, com.redhat.component=rhceph-container)
Nov 28 04:58:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail
Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0)
Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0)
Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0)
Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0)
Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 04:58:07 localhost ceph-mon[301134]: [28/Nov/2025:09:58:06] ENGINE Bus STARTING
Nov 28 04:58:07 localhost ceph-mon[301134]: [28/Nov/2025:09:58:06] ENGINE Serving on http://172.18.0.108:8765
Nov 28 04:58:07 localhost ceph-mon[301134]: [28/Nov/2025:09:58:07] ENGINE Serving on https://172.18.0.108:7150
Nov 28 04:58:07 localhost ceph-mon[301134]: [28/Nov/2025:09:58:07] ENGINE Bus STARTED
Nov 28 04:58:07 localhost ceph-mon[301134]: [28/Nov/2025:09:58:07] ENGINE Client ('172.18.0.108', 56132) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 28 04:58:07 localhost ceph-mon[301134]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Nov 28 04:58:07 localhost ceph-mon[301134]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Nov 28 04:58:07 localhost ceph-mon[301134]: Cluster is now healthy
Nov 28 04:58:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:58:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 04:58:07 localhost
ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:58:07 localhost ceph-mgr[286188]: [devicehealth INFO root] Check health Nov 28 04:58:08 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:08 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:08 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:58:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:58:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Nov 28 04:58:08 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:58:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Nov 28 04:58:08 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log 
[INF] : Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:58:09 localhost 
ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on 
np0005538513.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:58:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating 
np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: 
Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 04:58:09 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 04:58:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:58:09 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:09 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:10 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:10 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:10 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:10 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:10 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:10 localhost ceph-mgr[286188]: log_channel(cephadm) 
log [INF] : Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:10 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:10 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:10 localhost ceph-mgr[286188]: mgr.server handle_open ignoring open from mgr.np0005538514.djozup 172.18.0.107:0/3915016929; not ready for session (expect reconnect) Nov 28 04:58:10 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:10 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:11 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:11 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.conf Nov 28 04:58:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005538514.djozup", "id": "np0005538514.djozup"} v 0) Nov 28 04:58:11 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "mgr metadata", "who": "np0005538514.djozup", "id": "np0005538514.djozup"} : dispatch Nov 28 04:58:11 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating 
np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: [cephadm INFO cephadm.serve] Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 04:58:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0) Nov 28 04:58:12 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 8e2b0e75-f61a-4829-9dff-75d4bcf68e11 (Updating node-proxy deployment (+3 -> 3)) Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 8e2b0e75-f61a-4829-9dff-75d4bcf68e11 (Updating node-proxy deployment (+3 -> 3)) Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] Completed event 8e2b0e75-f61a-4829-9dff-75d4bcf68e11 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 04:58:12 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:12 localhost ceph-mon[301134]: Updating np0005538515.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:12 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/etc/ceph/ceph.client.admin.keyring Nov 28 04:58:12 localhost ceph-mon[301134]: Updating np0005538514.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:12 localhost ceph-mon[301134]: Updating 
np0005538515.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev b8f44057-e882-4490-b8d1-70c34e1686a0 (Updating node-proxy deployment (+3 -> 3)) Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev b8f44057-e882-4490-b8d1-70c34e1686a0 (Updating node-proxy deployment (+3 -> 3)) 
Nov 28 04:58:12 localhost ceph-mgr[286188]: [progress INFO root] Completed event b8f44057-e882-4490-b8d1-70c34e1686a0 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 04:58:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:58:12 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 04:58:13 localhost ceph-mon[301134]: Updating np0005538513.localdomain:/var/lib/ceph/2c5417c9-00eb-57d5-a565-ddecbc7995c1/config/ceph.client.admin.keyring Nov 28 04:58:13 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:58:13 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:58:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s Nov 28 04:58:13 localhost podman[305342]: 2025-11-28 09:58:13.639284605 +0000 UTC m=+0.085591660 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, 
io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6) Nov 28 04:58:13 localhost podman[305342]: 2025-11-28 09:58:13.68009185 +0000 UTC m=+0.126398845 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, container_name=openstack_network_exporter) Nov 28 04:58:13 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 04:58:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s Nov 28 04:58:15 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:58:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 04:58:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:58:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s Nov 28 04:58:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:58:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:58:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:58:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:58:23 localhost podman[305362]: 2025-11-28 09:58:23.001610302 +0000 UTC m=+0.101862913 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
org.label-schema.license=GPLv2) Nov 28 04:58:23 localhost podman[305364]: 2025-11-28 09:58:23.052296536 +0000 UTC m=+0.148625499 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 04:58:23 localhost podman[305364]: 
2025-11-28 09:58:23.086438019 +0000 UTC m=+0.182767002 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:58:23 localhost podman[305363]: 2025-11-28 09:58:23.099558016 +0000 UTC m=+0.195910989 container health_status 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:58:23 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:58:23 localhost podman[305362]: 2025-11-28 09:58:23.119312894 +0000 UTC m=+0.219565455 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:58:23 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:58:23 localhost podman[305363]: 2025-11-28 09:58:23.166683327 +0000 UTC m=+0.263036320 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible) Nov 28 04:58:23 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:58:23 localhost podman[305365]: 2025-11-28 09:58:23.228092086 +0000 UTC m=+0.319557071 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:58:23 localhost podman[305365]: 2025-11-28 09:58:23.236940933 +0000 UTC m=+0.328405958 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:58:23 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:58:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s Nov 28 04:58:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:58:25 localhost podman[305448]: 2025-11-28 09:58:25.981145405 +0000 UTC m=+0.090381248 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:58:25 localhost podman[305448]: 2025-11-28 09:58:25.992481158 +0000 UTC m=+0.101717061 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:58:26 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:58:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:27 localhost openstack_network_exporter[240973]: ERROR 09:58:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:58:27 localhost openstack_network_exporter[240973]: ERROR 09:58:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:58:27 localhost openstack_network_exporter[240973]: ERROR 09:58:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:58:27 localhost openstack_network_exporter[240973]: ERROR 09:58:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:58:27 localhost openstack_network_exporter[240973]: Nov 28 04:58:27 localhost openstack_network_exporter[240973]: ERROR 09:58:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:58:27 localhost openstack_network_exporter[240973]: Nov 28 04:58:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 04:58:28 localhost podman[239012]: time="2025-11-28T09:58:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:58:28 localhost podman[239012]: @ - - [28/Nov/2025:09:58:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:58:28 localhost podman[239012]: @ - - [28/Nov/2025:09:58:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19179 "" "Go-http-client/1.1" Nov 28 04:58:29 localhost podman[305472]: 2025-11-28 09:58:29.037817741 +0000 UTC m=+0.144237186 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd) Nov 28 04:58:29 localhost podman[305472]: 2025-11-28 09:58:29.053553076 +0000 UTC m=+0.159972521 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3) Nov 28 04:58:29 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:58:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:32 localhost nova_compute[280168]: 2025-11-28 09:58:32.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:33 localhost nova_compute[280168]: 2025-11-28 09:58:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:58:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 5064 
writes, 22K keys, 5064 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5064 writes, 762 syncs, 6.65 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 214 writes, 503 keys, 214 commit groups, 1.0 writes per commit group, ingest: 0.48 MB, 0.00 MB/s#012Interval WAL: 214 writes, 101 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 04:58:35 localhost nova_compute[280168]: 2025-11-28 09:58:35.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:35 localhost nova_compute[280168]: 2025-11-28 09:58:35.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:58:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 04:58:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:58:36 localhost nova_compute[280168]: 2025-11-28 09:58:36.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:36 localhost nova_compute[280168]: 2025-11-28 09:58:36.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:58:36 localhost nova_compute[280168]: 2025-11-28 09:58:36.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:58:36 localhost nova_compute[280168]: 2025-11-28 09:58:36.443 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:58:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:37 localhost nova_compute[280168]: 2025-11-28 09:58:37.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:37 localhost nova_compute[280168]: 2025-11-28 09:58:37.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 04:58:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.2 total, 600.0 interval#012Cumulative writes: 5888 writes, 25K keys, 5888 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s#012Cumulative WAL: 5888 writes, 780 syncs, 7.55 writes per sync, written: 0.02 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 34 writes, 127 keys, 34 commit groups, 1.0 writes per commit group, ingest: 0.22 MB, 0.00 MB/s#012Interval WAL: 34 writes, 17 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 04:58:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 
566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.257 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.258 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] 
Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.259 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:58:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:58:40 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2543969539' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.712 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.909 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.910 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11996MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.911 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.911 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.975 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.975 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:58:40 localhost nova_compute[280168]: 2025-11-28 09:58:40.990 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:58:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:58:41 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1985352112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:58:41 localhost nova_compute[280168]: 2025-11-28 09:58:41.408 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:58:41 localhost nova_compute[280168]: 2025-11-28 09:58:41.414 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:58:41 localhost nova_compute[280168]: 2025-11-28 09:58:41.428 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:58:41 localhost nova_compute[280168]: 2025-11-28 09:58:41.431 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:58:41 localhost nova_compute[280168]: 2025-11-28 09:58:41.431 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:58:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:42 localhost nova_compute[280168]: 2025-11-28 09:58:42.433 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:58:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:58:43 localhost systemd[1]: tmp-crun.HtS9eP.mount: Deactivated successfully. 
Nov 28 04:58:43 localhost podman[305535]: 2025-11-28 09:58:43.961813177 +0000 UTC m=+0.070301648 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.expose-services=) Nov 28 04:58:43 localhost podman[305535]: 2025-11-28 09:58:43.97843053 +0000 UTC m=+0.086918991 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) 
Nov 28 04:58:43 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:58:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:58:50.843 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:58:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:58:50.844 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:58:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:58:50.844 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:58:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:51 
localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:58:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:58:53 localhost podman[305556]: 2025-11-28 09:58:53.982415863 +0000 UTC m=+0.084747766 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 04:58:54 localhost podman[305555]: 2025-11-28 09:58:54.08903933 +0000 UTC m=+0.197437056 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team) Nov 28 04:58:54 localhost podman[305563]: 2025-11-28 09:58:54.053901097 +0000 UTC m=+0.148390932 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 04:58:54 localhost podman[305555]: 2025-11-28 09:58:54.127635588 +0000 UTC m=+0.236033324 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 
'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute) Nov 28 04:58:54 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:58:54 localhost podman[305557]: 2025-11-28 09:58:54.157922405 +0000 UTC m=+0.254710080 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 04:58:54 localhost podman[305557]: 2025-11-28 09:58:54.167557646 +0000 UTC 
m=+0.264345381 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible) Nov 28 04:58:54 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:58:54 localhost podman[305563]: 2025-11-28 09:58:54.18388184 +0000 UTC m=+0.278371655 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 04:58:54 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:58:54 localhost podman[305556]: 2025-11-28 09:58:54.208766343 +0000 UTC m=+0.311098316 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0) Nov 28 04:58:54 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:58:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:58:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 04:58:56 localhost podman[305635]: 2025-11-28 09:58:56.980305141 +0000 UTC m=+0.085671653 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:58:56 localhost podman[305635]: 2025-11-28 09:58:56.994389807 +0000 UTC m=+0.099756299 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 04:58:57 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:58:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:57 localhost openstack_network_exporter[240973]: ERROR 09:58:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:58:57 localhost openstack_network_exporter[240973]: ERROR 09:58:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:58:57 localhost openstack_network_exporter[240973]: ERROR 09:58:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:58:57 localhost openstack_network_exporter[240973]: ERROR 09:58:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:58:57 localhost openstack_network_exporter[240973]: Nov 28 04:58:57 localhost openstack_network_exporter[240973]: ERROR 09:58:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:58:57 localhost openstack_network_exporter[240973]: Nov 28 04:58:58 localhost podman[239012]: time="2025-11-28T09:58:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:58:58 localhost podman[239012]: @ - - [28/Nov/2025:09:58:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:58:58 localhost podman[239012]: @ - - [28/Nov/2025:09:58:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19184 "" "Go-http-client/1.1" Nov 28 04:58:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:58:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:58:59 localhost podman[305658]: 2025-11-28 09:58:59.983972404 +0000 UTC m=+0.092833751 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2) Nov 28 04:59:00 localhost podman[305658]: 2025-11-28 09:59:00.022613682 +0000 UTC 
m=+0.131474989 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 04:59:00 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 04:59:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_09:59:05 Nov 28 04:59:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 04:59:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 04:59:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_metadata', 'images', 'volumes', 'backups', '.mgr', 'vms', 'manila_data'] Nov 28 04:59:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 04:59:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 
0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 04:59:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16) Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 04:59:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 04:59:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 04:59:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 
kv_alloc: 322961408 Nov 28 04:59:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 04:59:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 04:59:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 04:59:13 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:59:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 04:59:13 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev e66e0e5b-c775-4708-a26b-b94b8c033107 (Updating node-proxy deployment (+3 -> 3)) Nov 28 04:59:13 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev e66e0e5b-c775-4708-a26b-b94b8c033107 (Updating node-proxy deployment (+3 -> 3)) Nov 28 04:59:13 localhost ceph-mgr[286188]: [progress INFO root] Completed event e66e0e5b-c775-4708-a26b-b94b8c033107 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 04:59:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 04:59:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 
04:59:14 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 04:59:14 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:59:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 04:59:14 localhost systemd[1]: tmp-crun.yCiBRz.mount: Deactivated successfully. Nov 28 04:59:14 localhost podman[305764]: 2025-11-28 09:59:14.991793917 +0000 UTC m=+0.095690267 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 04:59:15 localhost podman[305764]: 2025-11-28 09:59:15.032542739 +0000 UTC m=+0.136439089 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, 
container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7) Nov 28 04:59:15 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:59:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:15 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 04:59:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 04:59:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 04:59:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0. 
Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.809484) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962809575, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2566, "num_deletes": 253, "total_data_size": 5852505, "memory_usage": 6144320, "flush_reason": "Manual Compaction"} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962838397, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3664999, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12857, "largest_seqno": 15418, "table_properties": {"data_size": 3655199, "index_size": 6049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22841, "raw_average_key_size": 21, "raw_value_size": 3634616, "raw_average_value_size": 3451, "num_data_blocks": 261, "num_entries": 1053, "num_filter_entries": 1053, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323814, "oldest_key_time": 1764323814, "file_creation_time": 1764323962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 28969 microseconds, and 8142 cpu microseconds. Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.838459) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3664999 bytes OK Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.838485) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.840630) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.840655) EVENT_LOG_v1 {"time_micros": 1764323962840648, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.840678) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5840674, prev total WAL file size 
5840674, number of live WAL files 2. Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.842004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end) Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3579KB)], [15(17MB)] Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962842058, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 21675810, "oldest_snapshot_seqno": -1} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 12155 keys, 18737846 bytes, temperature: kUnknown Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962949422, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18737846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18667031, "index_size": 39354, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30405, "raw_key_size": 324291, "raw_average_key_size": 26, "raw_value_size": 18458709, 
"raw_average_value_size": 1518, "num_data_blocks": 1508, "num_entries": 12155, "num_filter_entries": 12155, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764323962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.949749) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18737846 bytes Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.951721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.8 rd, 174.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 17.2 +0.0 blob) out(17.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 12692, records dropped: 537 output_compression: NoCompression Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.951758) EVENT_LOG_v1 {"time_micros": 1764323962951742, "job": 6, "event": "compaction_finished", "compaction_time_micros": 107425, "compaction_time_cpu_micros": 51504, "output_level": 6, "num_output_files": 1, "total_output_size": 18737846, "num_input_records": 12692, "num_output_records": 12155, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962952653, "job": 6, "event": "table_file_deletion", "file_number": 17} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764323962956350, 
"job": 6, "event": "table_file_deletion", "file_number": 15} Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.841868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.956513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.956521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.956524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.956527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:22 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-09:59:22.956530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 04:59:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 04:59:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 04:59:24 localhost podman[305784]: 2025-11-28 09:59:24.986798149 +0000 UTC m=+0.089076177 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3) Nov 28 04:59:25 localhost podman[305784]: 2025-11-28 09:59:25.023779798 +0000 UTC m=+0.126057886 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.schema-version=1.0) Nov 28 04:59:25 localhost systemd[1]: tmp-crun.MN1w9F.mount: Deactivated successfully. Nov 28 04:59:25 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 04:59:25 localhost podman[305795]: 2025-11-28 09:59:25.040470833 +0000 UTC m=+0.133998866 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:59:25 localhost podman[305795]: 2025-11-28 09:59:25.053377663 +0000 UTC m=+0.146905716 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:59:25 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 04:59:25 localhost systemd[1]: tmp-crun.Srx31m.mount: Deactivated successfully. Nov 28 04:59:25 localhost podman[305786]: 2025-11-28 09:59:25.143305625 +0000 UTC m=+0.238602112 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:59:25 localhost podman[305786]: 2025-11-28 09:59:25.17420476 +0000 UTC m=+0.269501247 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Nov 28 04:59:25 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 04:59:25 localhost podman[305785]: 2025-11-28 09:59:25.195060131 +0000 UTC m=+0.296342519 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:59:25 localhost podman[305785]: 2025-11-28 09:59:25.2604921 +0000 UTC m=+0.361774538 container exec_died 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 04:59:25 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 04:59:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:27 localhost openstack_network_exporter[240973]: ERROR 09:59:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:59:27 localhost openstack_network_exporter[240973]: ERROR 09:59:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:59:27 localhost openstack_network_exporter[240973]: ERROR 09:59:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:59:27 localhost openstack_network_exporter[240973]: ERROR 09:59:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:59:27 localhost openstack_network_exporter[240973]: Nov 28 04:59:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:27 localhost openstack_network_exporter[240973]: ERROR 09:59:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:59:27 localhost openstack_network_exporter[240973]: Nov 28 04:59:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:59:27 localhost podman[305866]: 2025-11-28 09:59:27.981021234 +0000 UTC m=+0.088649883 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:59:27 localhost podman[305866]: 2025-11-28 09:59:27.994387859 +0000 UTC m=+0.102016498 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 04:59:28 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:59:28 localhost podman[239012]: time="2025-11-28T09:59:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:59:28 localhost podman[239012]: @ - - [28/Nov/2025:09:59:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:59:28 localhost podman[239012]: @ - - [28/Nov/2025:09:59:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19185 "" "Go-http-client/1.1" Nov 28 04:59:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 04:59:30 localhost podman[305889]: 2025-11-28 09:59:30.972386884 +0000 UTC m=+0.078852037 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:59:30 localhost podman[305889]: 2025-11-28 09:59:30.990528384 +0000 UTC m=+0.096993557 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 04:59:31 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 04:59:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:33 localhost nova_compute[280168]: 2025-11-28 09:59:33.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:35 localhost nova_compute[280168]: 2025-11-28 09:59:35.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:35 localhost nova_compute[280168]: 2025-11-28 09:59:35.239 280172 
DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:35 localhost nova_compute[280168]: 2025-11-28 09:59:35.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 04:59:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 566 MiB used, 41 GiB / 42 GiB avail Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 28 04:59:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 04:59:36 localhost nova_compute[280168]: 2025-11-28 09:59:36.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Nov 28 04:59:38 localhost nova_compute[280168]: 2025-11-28 09:59:38.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:38 localhost nova_compute[280168]: 2025-11-28 09:59:38.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 04:59:38 localhost nova_compute[280168]: 2025-11-28 09:59:38.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 04:59:38 localhost nova_compute[280168]: 2025-11-28 09:59:38.264 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - 
- - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 04:59:38 localhost nova_compute[280168]: 2025-11-28 09:59:38.264 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:39 localhost nova_compute[280168]: 2025-11-28 09:59:39.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Nov 28 04:59:41 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:41.005 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 04:59:41 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:41.006 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 04:59:41 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Nov 28 04:59:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.233 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.260 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by 
"nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.260 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.260 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:59:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:59:42 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3275500434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.735 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.970 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.972 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=12018MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.973 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:59:42 localhost nova_compute[280168]: 2025-11-28 09:59:42.973 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.073 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.073 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.100 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 04:59:43 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 04:59:43 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/164218660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.540 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.546 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.560 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.561 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 04:59:43 localhost nova_compute[280168]: 2025-11-28 09:59:43.561 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:59:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Nov 28 04:59:44 localhost nova_compute[280168]: 2025-11-28 09:59:44.563 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 04:59:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 570 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s Nov 28 04:59:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 04:59:45 localhost podman[305952]: 2025-11-28 09:59:45.994046629 +0000 UTC m=+0.102508703 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, distribution-scope=public, name=ubi9-minimal) Nov 28 04:59:46 localhost podman[305952]: 2025-11-28 09:59:46.010591939 +0000 UTC m=+0.119054053 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350) Nov 28 04:59:46 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 04:59:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e93 e93: 6 total, 6 up, 6 in Nov 28 04:59:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:47 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:47.008 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 04:59:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 125 MiB data, 619 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s Nov 28 04:59:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 e94: 6 total, 6 up, 6 in Nov 28 04:59:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 125 MiB data, 619 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.6 MiB/s wr, 24 op/s Nov 28 04:59:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:50.843 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 04:59:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:50.844 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 04:59:50 localhost ovn_metadata_agent[158525]: 2025-11-28 09:59:50.844 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 04:59:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s Nov 28 04:59:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s Nov 28 04:59:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 4.3 MiB/s wr, 40 op/s Nov 28 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 04:59:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 04:59:55 localhost podman[305970]: 2025-11-28 09:59:55.974305923 +0000 UTC m=+0.082617090 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.license=GPLv2) Nov 28 04:59:55 localhost podman[305970]: 2025-11-28 09:59:55.988453971 +0000 UTC m=+0.096765158 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3) Nov 28 04:59:56 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated 
successfully. Nov 28 04:59:56 localhost podman[305971]: 2025-11-28 09:59:56.079891398 +0000 UTC m=+0.184265897 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 04:59:56 localhost podman[305972]: 2025-11-28 09:59:56.045240999 +0000 UTC m=+0.142360218 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 04:59:56 localhost podman[305972]: 2025-11-28 09:59:56.130488489 +0000 UTC m=+0.227607678 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 04:59:56 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 04:59:56 localhost podman[305978]: 2025-11-28 09:59:56.143711 +0000 UTC m=+0.239125948 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 04:59:56 localhost podman[305971]: 2025-11-28 09:59:56.147492504 +0000 UTC m=+0.251866963 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 04:59:56 localhost podman[305978]: 2025-11-28 09:59:56.157427154 +0000 UTC m=+0.252842142 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 04:59:56 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 04:59:56 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 04:59:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 04:59:57 localhost openstack_network_exporter[240973]: ERROR 09:59:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:59:57 localhost openstack_network_exporter[240973]: ERROR 09:59:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 04:59:57 localhost openstack_network_exporter[240973]: ERROR 09:59:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 04:59:57 localhost openstack_network_exporter[240973]: ERROR 09:59:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 04:59:57 localhost openstack_network_exporter[240973]: Nov 28 04:59:57 localhost openstack_network_exporter[240973]: ERROR 09:59:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 04:59:57 localhost openstack_network_exporter[240973]: Nov 28 04:59:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 18 op/s Nov 28 04:59:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 04:59:58 localhost podman[239012]: time="2025-11-28T09:59:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 04:59:58 localhost podman[239012]: @ - - [28/Nov/2025:09:59:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 04:59:58 localhost podman[239012]: @ - - [28/Nov/2025:09:59:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19186 "" "Go-http-client/1.1" Nov 28 04:59:59 localhost podman[306054]: 2025-11-28 09:59:59.027135513 +0000 UTC m=+0.135306645 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:59:59 localhost podman[306054]: 2025-11-28 09:59:59.039523407 +0000 UTC m=+0.147694599 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 04:59:59 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 04:59:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 16 op/s Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.625 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:00:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:00:00 localhost ceph-mon[301134]: overall HEALTH_OK Nov 28 05:00:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 15 op/s Nov 28 05:00:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:00:01 localhost podman[306079]: 2025-11-28 10:00:01.975243834 +0000 UTC m=+0.082893699 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125) Nov 28 05:00:01 localhost podman[306079]: 2025-11-28 10:00:01.990681941 +0000 UTC m=+0.098331856 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:00:02 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:00:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:00:05 Nov 28 05:00:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:00:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:00:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'manila_data', 'backups', 'manila_metadata', 'vms'] Nov 28 05:00:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:00:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg 
target 0.0 quantized to 32 (current 32) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:00:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:00:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:00:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:00:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 
kv_alloc: 322961408 Nov 28 05:00:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:14 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:14.685 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:00:13Z, description=, device_id=b12e58f4-c600-4789-9e60-18c753a08ff6, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e1feac1c-84ce-4045-8255-738ae05741b8, ip_allocation=immediate, mac_address=fa:16:3e:cd:76:d8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=114, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:00:14Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable 
created with log file: #19. Immutable memtables: 0. Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.802668) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014802724, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 852, "num_deletes": 256, "total_data_size": 1242869, "memory_usage": 1266192, "flush_reason": "Manual Compaction"} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014810222, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 816319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15424, "largest_seqno": 16270, "table_properties": {"data_size": 812648, "index_size": 1462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8362, "raw_average_key_size": 18, "raw_value_size": 805113, "raw_average_value_size": 1821, "num_data_blocks": 65, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": 
"window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323963, "oldest_key_time": 1764323963, "file_creation_time": 1764324014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 7579 microseconds, and 2143 cpu microseconds. Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.810254) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 816319 bytes OK Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.810272) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.813883) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.813898) EVENT_LOG_v1 {"time_micros": 1764324014813893, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.813914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL 
files size 1238461, prev total WAL file size 1238785, number of live WAL files 2. Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.815054) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373730' seq:72057594037927935, type:22 .. '6C6F676D0034303232' seq:0, type:0; will stop at (end) Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(797KB)], [18(17MB)] Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014815092, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 19554165, "oldest_snapshot_seqno": -1} Nov 28 05:00:14 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:00:14 localhost podman[306182]: 2025-11-28 10:00:14.910456128 +0000 UTC m=+0.059075659 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 
Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:00:14 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:14 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:00:14 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:00:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:00:14 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 12065 keys, 19455235 bytes, temperature: kUnknown Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014957208, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 19455235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19383598, "index_size": 40368, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 323387, "raw_average_key_size": 26, "raw_value_size": 19175396, "raw_average_value_size": 1589, "num_data_blocks": 1549, "num_entries": 12065, "num_filter_entries": 12065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, 
"fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324014, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:00:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.957437) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 19455235 bytes Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.960565) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.5 rd, 136.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 17.9 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(47.8) write-amplify(23.8) OK, records in: 12597, records dropped: 532 output_compression: NoCompression Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.960598) EVENT_LOG_v1 {"time_micros": 1764324014960583, "job": 8, "event": "compaction_finished", 
"compaction_time_micros": 142183, "compaction_time_cpu_micros": 39584, "output_level": 6, "num_output_files": 1, "total_output_size": 19455235, "num_input_records": 12597, "num_output_records": 12065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014961131, "job": 8, "event": "table_file_deletion", "file_number": 20} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324014964455, "job": 8, "event": "table_file_deletion", "file_number": 18} Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.814976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.964622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.964632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.964636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.964641) 
[db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:00:14.964645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:00:14 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev d57516a0-7dc2-4aaf-ba81-14e3a5c978fb (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:00:14 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev d57516a0-7dc2-4aaf-ba81-14e3a5c978fb (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:00:14 localhost ceph-mgr[286188]: [progress INFO root] Completed event d57516a0-7dc2-4aaf-ba81-14e3a5c978fb (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:00:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:00:14 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:00:14 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:00:14 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:00:15 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:15.151 261346 INFO neutron.agent.dhcp.agent [None req-9530903c-83e5-45c2-9004-e11a1d9be55f - - - - - -] DHCP configuration for ports {'e1feac1c-84ce-4045-8255-738ae05741b8'} is completed#033[00m Nov 28 05:00:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:15 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:00:15 
localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:00:15 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:00:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:00:16 localhost podman[306223]: 2025-11-28 10:00:16.982614881 +0000 UTC m=+0.087998123 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc.) 
Nov 28 05:00:17 localhost podman[306223]: 2025-11-28 10:00:17.000561255 +0000 UTC m=+0.105944487 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 05:00:17 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:00:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:22.880 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:00:22Z, description=, device_id=76efbe69-508f-4a6c-bc6c-575aca933da7, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b4af8288-f645-4ff4-99bf-dd2772bb45d9, ip_allocation=immediate, mac_address=fa:16:3e:98:54:59, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, 
updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=188, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:00:22Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:00:23 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:00:23 localhost podman[306259]: 2025-11-28 10:00:23.089525859 +0000 UTC m=+0.063274065 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:00:23 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:23 localhost systemd[297255]: Created slice User Background Tasks Slice. Nov 28 05:00:23 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:23 localhost systemd[297255]: Starting Cleanup of User's Temporary Files and Directories... Nov 28 05:00:23 localhost systemd[297255]: Finished Cleanup of User's Temporary Files and Directories. 
Nov 28 05:00:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:23.332 261346 INFO neutron.agent.dhcp.agent [None req-29033f13-0547-4c9e-8569-9f9799d316d6 - - - - - -] DHCP configuration for ports {'b4af8288-f645-4ff4-99bf-dd2772bb45d9'} is completed#033[00m Nov 28 05:00:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:00:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:00:26 localhost systemd[1]: tmp-crun.mCFX8N.mount: Deactivated successfully. 
Nov 28 05:00:26 localhost podman[306285]: 2025-11-28 10:00:26.990286278 +0000 UTC m=+0.087847570 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:00:27 localhost podman[306285]: 2025-11-28 10:00:27.02341869 +0000 UTC 
m=+0.120979992 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:00:27 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:00:27 localhost podman[306284]: 2025-11-28 10:00:27.039436664 +0000 UTC m=+0.136999076 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:00:27 localhost podman[306284]: 2025-11-28 10:00:27.085467888 +0000 UTC m=+0.183030250 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2) Nov 28 05:00:27 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:00:27 localhost podman[306283]: 2025-11-28 10:00:27.102507623 +0000 UTC m=+0.201649253 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:00:27 localhost podman[306283]: 2025-11-28 10:00:27.140531863 +0000 UTC m=+0.239673463 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:00:27 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:00:27 localhost podman[306286]: 2025-11-28 10:00:27.145875355 +0000 UTC m=+0.240286392 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:00:27 localhost podman[306286]: 2025-11-28 10:00:27.229523997 +0000 UTC m=+0.323934954 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:00:27 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:00:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:27.445 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:00:27Z, description=, device_id=cd2a6086-9326-4cdd-a015-4768e2092068, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=54e1a6ba-02cc-4dd7-8b89-abb02b7f636f, ip_allocation=immediate, mac_address=fa:16:3e:2d:ce:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=236, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:00:27Z on network 
887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:00:27 localhost openstack_network_exporter[240973]: ERROR 10:00:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:00:27 localhost openstack_network_exporter[240973]: ERROR 10:00:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:00:27 localhost openstack_network_exporter[240973]: ERROR 10:00:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:00:27 localhost openstack_network_exporter[240973]: ERROR 10:00:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:00:27 localhost openstack_network_exporter[240973]: Nov 28 05:00:27 localhost openstack_network_exporter[240973]: ERROR 10:00:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:00:27 localhost openstack_network_exporter[240973]: Nov 28 05:00:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:27 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:00:27 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:27 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:27 localhost podman[306383]: 2025-11-28 10:00:27.665860651 +0000 UTC m=+0.059053048 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:00:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:27.890 261346 INFO neutron.agent.dhcp.agent [None req-d06bab2e-baa5-41c1-aede-fe332e955b2e - - - - - -] DHCP configuration for ports {'54e1a6ba-02cc-4dd7-8b89-abb02b7f636f'} is completed#033[00m Nov 28 05:00:28 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:28.463 2 INFO neutron.agent.securitygroups_rpc [None req-cdabab7f-6f0b-439c-ae1d-dd4a3208cd11 c64867c2bac34a819c0995d0b72ee9a7 3e4b394501d24dc7954ec5d2f27b8081 - - default default] Security group member updated ['dc0a6e12-205a-4d7d-adb2-6545f08f7990']#033[00m Nov 28 05:00:28 localhost podman[239012]: time="2025-11-28T10:00:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:00:28 localhost podman[239012]: @ - - [28/Nov/2025:10:00:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:00:28 localhost podman[239012]: @ - - [28/Nov/2025:10:00:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19186 "" "Go-http-client/1.1" Nov 28 05:00:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:00:29 localhost podman[306403]: 2025-11-28 10:00:29.983881844 +0000 UTC m=+0.084848968 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:00:29 localhost podman[306403]: 2025-11-28 10:00:29.994481045 +0000 UTC m=+0.095448209 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:00:30 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:00:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:32 localhost ovn_controller[152726]: 2025-11-28T10:00:32Z|00039|memory|INFO|peak resident set size grew 53% in last 2237.5 seconds, from 13040 kB to 19928 kB Nov 28 05:00:32 localhost ovn_controller[152726]: 2025-11-28T10:00:32Z|00040|memory|INFO|idl-cells-OVN_Southbound:7680 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:209 lflow-cache-entries-cache-matches:236 lflow-cache-size-KB:803 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:361 ofctrl_installed_flow_usage-KB:265 ofctrl_sb_flow_ref_usage-KB:140 Nov 28 05:00:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:00:32 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:32.900 2 INFO neutron.agent.securitygroups_rpc [None req-d73a2eae-adce-4d72-91ae-34d59db28d8a c64867c2bac34a819c0995d0b72ee9a7 3e4b394501d24dc7954ec5d2f27b8081 - - default default] Security group member updated ['dc0a6e12-205a-4d7d-adb2-6545f08f7990']#033[00m Nov 28 05:00:32 localhost systemd[1]: tmp-crun.N4fTHe.mount: Deactivated successfully. 
Nov 28 05:00:32 localhost podman[306426]: 2025-11-28 10:00:32.980484793 +0000 UTC m=+0.077641370 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:00:33 localhost podman[306426]: 2025-11-28 10:00:33.017882415 +0000 UTC m=+0.115038972 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:00:33 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:00:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:35.016 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:00:34Z, description=, device_id=5e2bdb5c-9386-4f23-88bb-b64884bb41d1, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3cbad716-1cc4-4dc7-a41c-a00dc20f804b, ip_allocation=immediate, mac_address=fa:16:3e:9a:66:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=314, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:00:34Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:00:35 localhost nova_compute[280168]: 2025-11-28 10:00:35.238 280172 DEBUG oslo_service.periodic_task [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:35 localhost nova_compute[280168]: 2025-11-28 10:00:35.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:35 localhost nova_compute[280168]: 2025-11-28 10:00:35.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:00:35 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:00:35 localhost podman[306462]: 2025-11-28 10:00:35.2444066 +0000 UTC m=+0.057572382 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 05:00:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB 
data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:35.661 261346 INFO neutron.agent.dhcp.agent [None req-a77b74cd-c8e7-437b-87b5-52b5da6d1c28 - - - - - -] DHCP configuration for ports {'3cbad716-1cc4-4dc7-a41c-a00dc20f804b'} is completed#033[00m Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:00:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:00:36 localhost nova_compute[280168]: 2025-11-28 10:00:36.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:36 localhost nova_compute[280168]: 2025-11-28 10:00:36.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 05:00:36 localhost nova_compute[280168]: 2025-11-28 10:00:36.265 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 05:00:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:37 
localhost nova_compute[280168]: 2025-11-28 10:00:37.264 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:37 localhost podman[306499]: 2025-11-28 10:00:37.686462318 +0000 UTC m=+0.062850293 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:00:37 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:00:37 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:37 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:38 localhost nova_compute[280168]: 2025-11-28 10:00:38.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 
05:00:40 localhost nova_compute[280168]: 2025-11-28 10:00:40.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:40 localhost nova_compute[280168]: 2025-11-28 10:00:40.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:00:40 localhost nova_compute[280168]: 2025-11-28 10:00:40.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:00:40 localhost nova_compute[280168]: 2025-11-28 10:00:40.262 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:00:41 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:41.124 2 INFO neutron.agent.securitygroups_rpc [req-a368a1e6-562c-4526-ae3a-0e69b2a15bca req-d778405e-6e70-4ce8-a2a0-d781e7c8b4de 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['f59fc346-b907-4d25-9b54-7ce550f4338f']#033[00m Nov 28 05:00:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:41.153 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:00:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:41.157 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:00:41 localhost nova_compute[280168]: 2025-11-28 10:00:41.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:42 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:42.144 2 INFO neutron.agent.securitygroups_rpc [req-91a233d2-d128-4e77-ba4a-bffdfb538e2c req-2dda1abe-8246-445d-82ca-7050072f5ab7 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['793a871e-42bd-4871-9764-ed4c16f282ee']#033[00m Nov 28 05:00:42 localhost nova_compute[280168]: 2025-11-28 10:00:42.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:43 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:43.151 2 INFO neutron.agent.securitygroups_rpc [req-9f8c831a-ecf0-4e4e-9595-719ef0ba964a req-7a744279-94cd-4894-9a01-f869bc00b409 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['03578922-528e-499a-8e7e-7a5c262d5e64']#033[00m Nov 28 05:00:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:43.160 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 
10:00:43.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.260 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.260 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.261 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df 
--format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:00:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:43 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:00:43 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2762372881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.728 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.926 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.927 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11978MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.928 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:00:43 localhost nova_compute[280168]: 2025-11-28 10:00:43.929 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.113 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.114 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.371 280172 DEBUG 
nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.402 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.403 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.703 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, 
aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.742 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 05:00:44 localhost nova_compute[280168]: 2025-11-28 10:00:44.771 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:00:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:44.828 2 INFO neutron.agent.securitygroups_rpc [req-cee9d54d-2061-4e18-9dbb-7978cc78c723 req-de7c358c-da89-4568-8167-d43d2e6c2b50 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['0b748e56-a20d-4a74-8688-d245ea875072']#033[00m Nov 28 05:00:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:00:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2207683770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:00:45 localhost nova_compute[280168]: 2025-11-28 10:00:45.288 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:00:45 localhost nova_compute[280168]: 2025-11-28 10:00:45.295 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:00:45 localhost nova_compute[280168]: 2025-11-28 10:00:45.316 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 
1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:00:45 localhost nova_compute[280168]: 2025-11-28 10:00:45.319 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:00:45 localhost nova_compute[280168]: 2025-11-28 10:00:45.319 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:00:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 693 MiB used, 41 GiB / 42 GiB avail Nov 28 05:00:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:46.164 2 INFO neutron.agent.securitygroups_rpc [req-fb04029d-a55f-44ca-91cb-e1804a48ba9f req-358dd77e-4f38-41f7-8d87-d246c9bdd01f 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['e4169f5e-6c1f-4c53-9aba-f1f5fa41bfad']#033[00m Nov 28 05:00:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:46.821 2 INFO neutron.agent.securitygroups_rpc 
[req-587359a2-3e51-4204-a146-11784139df1a req-865ecdad-325d-46ee-b040-9c64055b894f 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['e4169f5e-6c1f-4c53-9aba-f1f5fa41bfad']#033[00m Nov 28 05:00:47 localhost neutron_sriov_agent[254415]: 2025-11-28 10:00:47.110 2 INFO neutron.agent.securitygroups_rpc [req-95a69005-5e60-4e46-b43c-b0ec65f622be req-ea491abe-a44c-4d6d-8b8a-511053fe8f93 67aa4e1531db4854a2788f0bbeaba314 40693e6dadaf448a8cb4caeb6899effc - - default default] Security group rule updated ['e4169f5e-6c1f-4c53-9aba-f1f5fa41bfad']#033[00m Nov 28 05:00:47 localhost nova_compute[280168]: 2025-11-28 10:00:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Nov 28 05:00:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:00:47 localhost podman[306564]: 2025-11-28 10:00:47.98248462 +0000 UTC m=+0.085827357 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 05:00:48 
localhost podman[306564]: 2025-11-28 10:00:48.021101199 +0000 UTC m=+0.124443936 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.) Nov 28 05:00:48 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:00:48 localhost nova_compute[280168]: 2025-11-28 10:00:48.344 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:00:48 localhost nova_compute[280168]: 2025-11-28 10:00:48.345 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 05:00:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s Nov 28 05:00:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:50.844 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:00:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:50.845 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:00:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:00:50.846 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:00:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 80 op/s Nov 28 05:00:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:52 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:00:52 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:52 localhost podman[306602]: 2025-11-28 10:00:52.495250596 +0000 UTC m=+0.084335513 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack 
Kubernetes Operator team, io.buildah.version=1.41.3) Nov 28 05:00:52 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:52.608 261346 INFO neutron.agent.dhcp.agent [None req-346e1842-d854-4979-83dc-6f928a59e0e2 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:00:52Z, description=, device_id=7dd8143d-bf72-4ceb-a8e4-27c584dc6e09, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=086df021-4554-4cc2-b965-7ba5df63e5cc, ip_allocation=immediate, mac_address=fa:16:3e:2a:ea:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=450, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:00:52Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:00:52 localhost podman[306640]: 2025-11-28 10:00:52.851179447 
+0000 UTC m=+0.060650906 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:00:52 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:00:52 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:00:52 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:00:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:00:53.201 261346 INFO neutron.agent.dhcp.agent [None req-a9d7f79b-7a99-4343-b36b-5790d64bbd21 - - - - - -] DHCP configuration for ports {'086df021-4554-4cc2-b965-7ba5df63e5cc'} is completed#033[00m Nov 28 05:00:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 80 op/s Nov 28 05:00:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 80 op/s Nov 28 05:00:55 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e95 e95: 6 total, 6 up, 6 in Nov 28 05:00:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:00:57 localhost openstack_network_exporter[240973]: ERROR 10:00:57 
appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:00:57 localhost openstack_network_exporter[240973]: ERROR 10:00:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:00:57 localhost openstack_network_exporter[240973]: ERROR 10:00:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:00:57 localhost openstack_network_exporter[240973]: ERROR 10:00:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:00:57 localhost openstack_network_exporter[240973]: Nov 28 05:00:57 localhost openstack_network_exporter[240973]: ERROR 10:00:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:00:57 localhost openstack_network_exporter[240973]: Nov 28 05:00:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 103 op/s Nov 28 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:00:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:00:57 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e96 e96: 6 total, 6 up, 6 in Nov 28 05:00:58 localhost podman[306661]: 2025-11-28 10:00:58.03562702 +0000 UTC m=+0.135590534 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:00:58 localhost systemd[1]: tmp-crun.JwZH0W.mount: Deactivated successfully. 
Nov 28 05:00:58 localhost podman[306662]: 2025-11-28 10:00:58.048143639 +0000 UTC m=+0.144575356 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 28 05:00:58 localhost podman[306662]: 2025-11-28 10:00:58.057635837 +0000 UTC 
m=+0.154067574 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:00:58 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:00:58 localhost podman[306663]: 2025-11-28 10:00:58.098503203 +0000 UTC m=+0.188703521 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:00:58 localhost podman[306663]: 2025-11-28 10:00:58.106969699 +0000 UTC m=+0.197170057 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:00:58 localhost podman[306661]: 2025-11-28 10:00:58.115997603 +0000 UTC m=+0.215961167 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller) Nov 28 05:00:58 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:00:58 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:00:58 localhost podman[306660]: 2025-11-28 10:00:58.191516957 +0000 UTC m=+0.294046238 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:00:58 localhost podman[306660]: 2025-11-28 10:00:58.20578789 +0000 UTC m=+0.308317191 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0) Nov 28 05:00:58 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:00:58 localhost podman[239012]: time="2025-11-28T10:00:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:00:58 localhost podman[239012]: @ - - [28/Nov/2025:10:00:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:00:58 localhost podman[239012]: @ - - [28/Nov/2025:10:00:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19185 "" "Go-http-client/1.1" Nov 28 05:00:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 2.2 KiB/s wr, 59 op/s Nov 28 05:01:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:01:00 localhost podman[306743]: 2025-11-28 10:01:00.972071438 +0000 UTC m=+0.077688562 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', 
'--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:01:01 localhost podman[306743]: 2025-11-28 10:01:01.006015875 +0000 UTC m=+0.111632969 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:01:01 localhost 
systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:01:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 4.0 KiB/s wr, 99 op/s Nov 28 05:01:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.257 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "7292509e-f294-4159-96e5-22d4712df2a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.257 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.279 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Starting instance... 
_do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.359 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.360 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.365 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Require both a host and instance NUMA topology to fit instance on host. 
numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.366 280172 INFO nova.compute.claims [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Claim successful on node np0005538515.localdomain#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.487 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:01:02 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3260841447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.924 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.931 280172 DEBUG nova.compute.provider_tree [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.949 280172 DEBUG nova.scheduler.client.report [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.977 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:02 localhost nova_compute[280168]: 2025-11-28 10:01:02.978 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.067 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.090 280172 INFO nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.108 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Start building block device mappings for instance. 
_build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.215 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.217 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.218 280172 INFO nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating image(s)#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.254 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.293 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ 
/usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.331 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.335 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "1d475a5fe6866c2fa864abfa6db335a58fd8123d" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:03 localhost nova_compute[280168]: 2025-11-28 10:01:03.337 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "1d475a5fe6866c2fa864abfa6db335a58fd8123d" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 4.0 KiB/s wr, 99 op/s Nov 28 05:01:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:03.788 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:03Z, description=, device_id=e8c7be5d-4204-4a6d-9f91-45b65598e58d, 
device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8b997639-c389-4236-b623-22a294d76e8e, ip_allocation=immediate, mac_address=fa:16:3e:36:94:3e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=514, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:03Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:01:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:03.912 261346 INFO neutron.agent.linux.ip_lib [None req-26c8f880-99f6-42d3-a916-e5fbef56693d - - - - - -] Device tapd0a70cfb-41 cannot be used as it has no MAC address#033[00m Nov 28 05:01:03 localhost podman[306858]: 2025-11-28 10:01:03.93012128 +0000 UTC m=+0.090552121 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:01:03 localhost systemd[1]: tmp-crun.ftvlsS.mount: Deactivated successfully. Nov 28 05:01:03 localhost kernel: device tapd0a70cfb-41 entered promiscuous mode Nov 28 05:01:03 localhost podman[306858]: 2025-11-28 10:01:03.944496375 +0000 UTC m=+0.104927246 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2) Nov 28 05:01:03 localhost NetworkManager[5965]: [1764324063.9457] manager: (tapd0a70cfb-41): new Generic device (/org/freedesktop/NetworkManager/Devices/15) Nov 28 05:01:03 localhost ovn_controller[152726]: 2025-11-28T10:01:03Z|00041|binding|INFO|Claiming lport d0a70cfb-41f8-4ab9-819b-560a898e8329 for this chassis. Nov 28 05:01:03 localhost ovn_controller[152726]: 2025-11-28T10:01:03Z|00042|binding|INFO|d0a70cfb-41f8-4ab9-819b-560a898e8329: Claiming unknown Nov 28 05:01:03 localhost systemd-udevd[306896]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:01:03 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:03.959 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-4feac402-945d-4d17-a15d-c8337ea9c266', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4feac402-945d-4d17-a15d-c8337ea9c266', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1bee3918a2345388c202f74e60af9c5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a3868fc-e35e-44db-9bd3-f12a417ed185, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=d0a70cfb-41f8-4ab9-819b-560a898e8329) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:01:03 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:03.961 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d0a70cfb-41f8-4ab9-819b-560a898e8329 in datapath 4feac402-945d-4d17-a15d-c8337ea9c266 bound to our chassis#033[00m Nov 28 05:01:03 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:03.964 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port a7005248-9fa7-4afe-ae28-d6f6bbb69c02 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:01:03 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:03.965 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4feac402-945d-4d17-a15d-c8337ea9c266, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:01:03 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:03.966 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb503cf-76be-499e-b7c0-2e546539f4a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:03 localhost journal[228057]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, ) Nov 28 05:01:03 localhost journal[228057]: hostname: np0005538515.localdomain Nov 28 05:01:03 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:03 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:03 localhost ovn_controller[152726]: 2025-11-28T10:01:03Z|00043|binding|INFO|Setting lport d0a70cfb-41f8-4ab9-819b-560a898e8329 ovn-installed in OVS Nov 28 05:01:03 localhost ovn_controller[152726]: 
2025-11-28T10:01:03Z|00044|binding|INFO|Setting lport d0a70cfb-41f8-4ab9-819b-560a898e8329 up in Southbound Nov 28 05:01:03 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:03 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:03 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:04 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:04 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:04 localhost journal[228057]: ethtool ioctl error on tapd0a70cfb-41: No such device Nov 28 05:01:04 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:01:04 localhost podman[306902]: 2025-11-28 10:01:04.062193966 +0000 UTC m=+0.067941616 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:01:04 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:01:04 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:04 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:04 localhost nova_compute[280168]: 2025-11-28 10:01:04.137 280172 DEBUG nova.virt.libvirt.imagebackend [None 
req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Image locations are: [{'url': 'rbd://2c5417c9-00eb-57d5-a565-ddecbc7995c1/images/85968a96-5a0e-43a4-9c04-3954f640a7ed/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2c5417c9-00eb-57d5-a565-ddecbc7995c1/images/85968a96-5a0e-43a4-9c04-3954f640a7ed/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m Nov 28 05:01:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:04.548 261346 INFO neutron.agent.dhcp.agent [None req-677340f8-c16e-4f0d-9c2e-d1a109d49412 - - - - - -] DHCP configuration for ports {'8b997639-c389-4236-b623-22a294d76e8e'} is completed#033[00m Nov 28 05:01:04 localhost systemd[1]: tmp-crun.96fcCp.mount: Deactivated successfully. Nov 28 05:01:04 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 e97: 6 total, 6 up, 6 in Nov 28 05:01:04 localhost podman[306989]: Nov 28 05:01:04 localhost podman[306989]: 2025-11-28 10:01:04.967909803 +0000 UTC m=+0.073783343 container create 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:01:05 localhost systemd[1]: Started libpod-conmon-421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a.scope. Nov 28 05:01:05 localhost systemd[1]: Started libcrun container. 
Nov 28 05:01:05 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a53b8d3f668b25246333f5de4a531a6e13da55d713122016f2fd29b9d52ffaf2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:01:05 localhost podman[306989]: 2025-11-28 10:01:04.938573546 +0000 UTC m=+0.044447076 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:01:05 localhost podman[306989]: 2025-11-28 10:01:05.046374148 +0000 UTC m=+0.152247688 container init 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:05 localhost podman[306989]: 2025-11-28 10:01:05.055094972 +0000 UTC m=+0.160968522 container start 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:01:05 localhost dnsmasq[307007]: started, version 2.85 cachesize 150 Nov 28 05:01:05 localhost dnsmasq[307007]: DNS service limited to local subnets Nov 28 05:01:05 localhost dnsmasq[307007]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:01:05 localhost dnsmasq[307007]: warning: no upstream servers configured Nov 28 05:01:05 localhost dnsmasq-dhcp[307007]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:01:05 localhost dnsmasq[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/addn_hosts - 0 addresses Nov 28 05:01:05 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/host Nov 28 05:01:05 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/opts Nov 28 05:01:05 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:05.195 261346 INFO neutron.agent.dhcp.agent [None req-35fa67ce-21ea-4a26-94d8-b7bd49186315 - - - - - -] DHCP configuration for ports {'3c0393c7-2178-4cfe-a769-33cf4168b6af'} is completed#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.296 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.372 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.part --force-share --output=json" returned: 0 in 0.076s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.373 280172 DEBUG nova.virt.images [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] 85968a96-5a0e-43a4-9c04-3954f640a7ed was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.374 280172 DEBUG nova.privsep.utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.375 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.part /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:01:05 Nov 28 05:01:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:01:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:01:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_data', '.mgr', 'volumes', 'manila_metadata', 'backups', 'vms', 'images'] Nov 28 05:01:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:01:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v98: 
177 pgs: 177 active+clean; 192 MiB data, 757 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 39 op/s Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.654 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.part /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.converted" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:05 localhost nova_compute[280168]: 2025-11-28 10:01:05.659 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004821470634422111 of space, bias 1.0, pg target 0.9642941268844222 quantized to 32 (current 32) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 
0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:01:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16) Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:01:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:01:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:01:06 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:06.177 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:05Z, description=, device_id=e8c7be5d-4204-4a6d-9f91-45b65598e58d, 
device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ec6ddc27-5566-4943-ab8e-4ee78e9f615c, ip_allocation=immediate, mac_address=fa:16:3e:9b:df:fd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:01:01Z, description=, dns_domain=, id=4feac402-945d-4d17-a15d-c8337ea9c266, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1343972998-network, port_security_enabled=True, project_id=f1bee3918a2345388c202f74e60af9c5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22843, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=507, status=ACTIVE, subnets=['e34cc883-2097-4e00-b953-2cb3d3328eb3'], tags=[], tenant_id=f1bee3918a2345388c202f74e60af9c5, updated_at=2025-11-28T10:01:02Z, vlan_transparent=None, network_id=4feac402-945d-4d17-a15d-c8337ea9c266, port_security_enabled=False, project_id=f1bee3918a2345388c202f74e60af9c5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=530, status=DOWN, tags=[], tenant_id=f1bee3918a2345388c202f74e60af9c5, updated_at=2025-11-28T10:01:05Z on network 4feac402-945d-4d17-a15d-c8337ea9c266#033[00m Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.220 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d.converted --force-share --output=json" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:06 localhost 
nova_compute[280168]: 2025-11-28 10:01:06.222 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "1d475a5fe6866c2fa864abfa6db335a58fd8123d" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 2.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.268 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.275 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d 7292509e-f294-4159-96e5-22d4712df2a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:06 localhost podman[307056]: 2025-11-28 10:01:06.375265171 +0000 UTC m=+0.056974605 container kill 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0) Nov 28 05:01:06 localhost dnsmasq[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/addn_hosts - 1 addresses Nov 28 05:01:06 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/host Nov 28 05:01:06 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/opts Nov 28 05:01:06 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:06.699 261346 INFO neutron.agent.dhcp.agent [None req-9d600ab1-f8ff-401f-948e-6a47429145b8 - - - - - -] DHCP configuration for ports {'ec6ddc27-5566-4943-ab8e-4ee78e9f615c'} is completed#033[00m Nov 28 05:01:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.752 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/1d475a5fe6866c2fa864abfa6db335a58fd8123d 7292509e-f294-4159-96e5-22d4712df2a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.840 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] resizing rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m Nov 28 05:01:06 localhost nova_compute[280168]: 2025-11-28 10:01:06.981 280172 DEBUG nova.objects.instance [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 
a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.217 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.218 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Ensure instance console log exists: /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.219 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.219 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 
05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.220 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.223 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T09:59:44Z,direct_url=,disk_format='qcow2',id=85968a96-5a0e-43a4-9c04-3954f640a7ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9dda653c53224db086060962b0702694',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-28T09:59:46Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'size': 0, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '85968a96-5a0e-43a4-9c04-3954f640a7ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m Nov 28 05:01:07 localhost 
nova_compute[280168]: 2025-11-28 10:01:07.230 280172 WARNING nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.234 280172 DEBUG nova.virt.libvirt.host [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Searching host: 'np0005538515.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.235 280172 DEBUG nova.virt.libvirt.host [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.238 280172 DEBUG nova.virt.libvirt.host [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Searching host: 'np0005538515.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.238 280172 DEBUG nova.virt.libvirt.host [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CPU controller found on host. 
_has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.239 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.240 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T09:59:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='98f289d4-5c06-4ab5-9089-7b580870d676',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-28T09:59:44Z,direct_url=,disk_format='qcow2',id=85968a96-5a0e-43a4-9c04-3954f640a7ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='9dda653c53224db086060962b0702694',properties=ImageMetaProps,protected=,size=21430272,status='active',tags=,updated_at=2025-11-28T09:59:46Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.241 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.241 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.242 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.242 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.243 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.243 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies 
/usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.244 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.245 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.245 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.245 280172 DEBUG nova.virt.hardware [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.251 280172 DEBUG nova.privsep.utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m Nov 28 05:01:07 
localhost nova_compute[280168]: 2025-11-28 10:01:07.252 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 175 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 3.8 MiB/s wr, 166 op/s Nov 28 05:01:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:01:07 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2931902013' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.711 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.746 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:07 localhost nova_compute[280168]: 2025-11-28 10:01:07.750 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 
a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:01:08 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3475554474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.250 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.254 280172 DEBUG nova.objects.instance [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.344 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] End _get_guest_xml xml=
[libvirt domain XML elided: the XML markup was stripped during log extraction, leaving only continuation prefixes. Recoverable field values from the stripped dump: uuid 7292509e-f294-4159-96e5-22d4712df2a0; name instance-00000007; memory 131072 KiB; 1 vCPU; nova metadata: display name tempest-UnshelveToHostMultiNodesTest-server-650509197, creationTime 2025-11-28 10:01:07, flavor memory 128 MB / 1 vCPU / 0 ephemeral / 0 swap / root disk 1 GB, owner user tempest-UnshelveToHostMultiNodesTest-426973173-project-member, project tempest-UnshelveToHostMultiNodesTest-426973173; sysinfo: manufacturer RDO, product OpenStack Compute, version 27.5.2-0.20250829104910.6f8decf.el9, serial/uuid 7292509e-f294-4159-96e5-22d4712df2a0, family Virtual Machine; os type hvm; rng backend /dev/urandom]
_get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.587
280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.588 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.588 280172 INFO nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Using config drive#033[00m Nov 28 05:01:08 localhost nova_compute[280168]: 2025-11-28 10:01:08.617 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:08.774 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:05Z, description=, device_id=e8c7be5d-4204-4a6d-9f91-45b65598e58d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=ec6ddc27-5566-4943-ab8e-4ee78e9f615c, ip_allocation=immediate, mac_address=fa:16:3e:9b:df:fd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:01:01Z, description=, dns_domain=, id=4feac402-945d-4d17-a15d-c8337ea9c266, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1343972998-network, port_security_enabled=True, project_id=f1bee3918a2345388c202f74e60af9c5, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=22843, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=507, status=ACTIVE, subnets=['e34cc883-2097-4e00-b953-2cb3d3328eb3'], tags=[], tenant_id=f1bee3918a2345388c202f74e60af9c5, updated_at=2025-11-28T10:01:02Z, vlan_transparent=None, network_id=4feac402-945d-4d17-a15d-c8337ea9c266, port_security_enabled=False, project_id=f1bee3918a2345388c202f74e60af9c5, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=530, status=DOWN, tags=[], tenant_id=f1bee3918a2345388c202f74e60af9c5, updated_at=2025-11-28T10:01:05Z on network 4feac402-945d-4d17-a15d-c8337ea9c266#033[00m Nov 28 05:01:08 localhost dnsmasq[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/addn_hosts - 1 addresses Nov 28 05:01:08 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/host Nov 28 05:01:08 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/opts Nov 28 05:01:08 localhost podman[307264]: 2025-11-28 10:01:08.993263463 +0000 UTC m=+0.070633198 container kill 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:01:08 localhost systemd[1]: tmp-crun.7rC0wW.mount: Deactivated successfully. Nov 28 05:01:09 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:09.551 261346 INFO neutron.agent.dhcp.agent [None req-914c1384-c4b0-487e-bfed-b49caeca73a9 - - - - - -] DHCP configuration for ports {'ec6ddc27-5566-4943-ab8e-4ee78e9f615c'} is completed#033[00m Nov 28 05:01:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 175 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.7 MiB/s wr, 162 op/s Nov 28 05:01:09 localhost nova_compute[280168]: 2025-11-28 10:01:09.799 280172 INFO nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating config drive at /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config#033[00m Nov 28 05:01:09 localhost nova_compute[280168]: 2025-11-28 10:01:09.806 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6474w8j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:09 localhost nova_compute[280168]: 2025-11-28 10:01:09.931 280172 DEBUG 
oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_6474w8j" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:09 localhost nova_compute[280168]: 2025-11-28 10:01:09.965 280172 DEBUG nova.storage.rbd_utils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:09 localhost nova_compute[280168]: 2025-11-28 10:01:09.969 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config 7292509e-f294-4159-96e5-22d4712df2a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.172 280172 DEBUG oslo_concurrency.processutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config 7292509e-f294-4159-96e5-22d4712df2a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute 
/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.173 280172 INFO nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Deleting local config drive /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config because it was imported into RBD.#033[00m Nov 28 05:01:10 localhost systemd[1]: Started libvirt secret daemon. Nov 28 05:01:10 localhost systemd-machined[201641]: New machine qemu-1-instance-00000007. Nov 28 05:01:10 localhost systemd[1]: Started Virtual Machine qemu-1-instance-00000007. Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.601 280172 DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.602 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Resumed (Lifecycle Event)#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.629 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.631 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance event wait completed in 0 seconds for wait_for_instance_event 
/usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.631 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.635 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.638 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance spawned successfully.#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.639 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.652 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] During sync_power_state the instance has a pending task 
(spawning). Skip.#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.653 280172 DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.653 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Started (Lifecycle Event)#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.675 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.681 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.685 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.686 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 
a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.686 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.687 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.687 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.688 280172 DEBUG nova.virt.libvirt.driver [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Found default for hw_vif_model of virtio _register_undefined_instance_details 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.711 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.740 280172 INFO nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Took 7.52 seconds to spawn the instance on the hypervisor.#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.741 280172 DEBUG nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.794 280172 INFO nova.compute.manager [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Took 8.46 seconds to build instance.#033[00m Nov 28 05:01:10 localhost nova_compute[280168]: 2025-11-28 10:01:10.821 280172 DEBUG oslo_concurrency.lockutils [None req-c08be0ce-6d64-47f2-999a-9837d83fcc2e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.._locked_do_build_and_run_instance" :: held 8.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:11 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 174 op/s Nov 28 05:01:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:11.682 2 INFO neutron.agent.securitygroups_rpc [None req-64ca5811-bbe2-4768-b546-3f3c65a295fa c64867c2bac34a819c0995d0b72ee9a7 3e4b394501d24dc7954ec5d2f27b8081 - - default default] Security group member updated ['dc0a6e12-205a-4d7d-adb2-6545f08f7990']#033[00m Nov 28 05:01:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:11 localhost nova_compute[280168]: 2025-11-28 10:01:11.907 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "7292509e-f294-4159-96e5-22d4712df2a0" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:11 localhost nova_compute[280168]: 2025-11-28 10:01:11.907 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:11 localhost nova_compute[280168]: 2025-11-28 10:01:11.907 280172 INFO nova.compute.manager [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shelving#033[00m Nov 28 05:01:11 localhost nova_compute[280168]: 2025-11-28 10:01:11.927 280172 DEBUG 
nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Nov 28 05:01:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:13.332 2 INFO neutron.agent.securitygroups_rpc [None req-0bbc429e-2462-4e1e-9b65-ff3c254999c8 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Security group member updated ['8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e']#033[00m Nov 28 05:01:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 4.7 MiB/s wr, 174 op/s Nov 28 05:01:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:13.672 2 INFO neutron.agent.securitygroups_rpc [None req-8218c41f-1821-4284-8a71-4b98eaf9d107 c64867c2bac34a819c0995d0b72ee9a7 3e4b394501d24dc7954ec5d2f27b8081 - - default default] Security group member updated ['dc0a6e12-205a-4d7d-adb2-6545f08f7990']#033[00m Nov 28 05:01:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 4.4 MiB/s wr, 161 op/s Nov 28 05:01:15 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:15.692 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:15Z, description=, device_id=350c5687-2c97-42e6-96bf-0b6c681cec37, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=23022a72-ee36-49ce-914d-58b90ee87225, ip_allocation=immediate, 
mac_address=fa:16:3e:c3:79:bb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=578, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:15Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:15 localhost podman[307454]: 2025-11-28 10:01:15.909882972 +0000 UTC m=+0.064463931 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:15 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:01:15 localhost dnsmasq-dhcp[261709]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:15 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:16.210 261346 INFO neutron.agent.dhcp.agent [None req-a49c3bc0-dd51-49a0-a914-bc9dcf3b4703 - - - - - -] DHCP configuration for ports {'23022a72-ee36-49ce-914d-58b90ee87225'} is completed#033[00m Nov 28 05:01:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:01:16 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:01:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:01:16 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:01:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:01:16 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 6ca18e3b-b6e0-430a-aa47-28e6913af879 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:01:16 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 6ca18e3b-b6e0-430a-aa47-28e6913af879 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:01:16 localhost ceph-mgr[286188]: [progress INFO root] Completed event 6ca18e3b-b6e0-430a-aa47-28e6913af879 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:01:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], 
"format": "json"} v 0) Nov 28 05:01:16 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:01:16 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:01:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:01:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:17 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:17.041 2 INFO neutron.agent.securitygroups_rpc [None req-2679fbf8-01c0-46c1-b86d-7a154868a163 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Security group member updated ['8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e']#033[00m Nov 28 05:01:17 localhost systemd[1]: tmp-crun.ltrGx4.mount: Deactivated successfully. 
Nov 28 05:01:17 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:01:17 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:17 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:17 localhost podman[307539]: 2025-11-28 10:01:17.26996829 +0000 UTC m=+0.079721074 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:01:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 194 op/s Nov 28 05:01:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:01:18 localhost systemd[1]: tmp-crun.k4oOFu.mount: Deactivated successfully. 
Nov 28 05:01:18 localhost podman[307560]: 2025-11-28 10:01:18.978237583 +0000 UTC m=+0.081900670 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal) Nov 28 05:01:18 localhost podman[307560]: 2025-11-28 10:01:18.997572848 +0000 UTC m=+0.101235955 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) Nov 28 05:01:19 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:01:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 873 KiB/s wr, 86 op/s Nov 28 05:01:20 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:01:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:01:20 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:01:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 873 KiB/s wr, 93 op/s Nov 28 05:01:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:21 localhost nova_compute[280168]: 2025-11-28 10:01:21.987 280172 DEBUG nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Nov 28 05:01:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:22.152 261346 INFO neutron.agent.linux.ip_lib [None req-f5fd574f-afee-45bd-b13e-c250e5c4c52d - - - - - -] Device tap52bd411f-3a cannot be used as it has no MAC address#033[00m Nov 28 05:01:22 localhost kernel: device tap52bd411f-3a entered promiscuous mode Nov 28 05:01:22 localhost NetworkManager[5965]: [1764324082.1862] manager: (tap52bd411f-3a): new Generic device (/org/freedesktop/NetworkManager/Devices/16) Nov 28 05:01:22 localhost ovn_controller[152726]: 2025-11-28T10:01:22Z|00045|binding|INFO|Claiming lport 
52bd411f-3a65-4ceb-b07c-480829869bbb for this chassis. Nov 28 05:01:22 localhost ovn_controller[152726]: 2025-11-28T10:01:22Z|00046|binding|INFO|52bd411f-3a65-4ceb-b07c-480829869bbb: Claiming unknown Nov 28 05:01:22 localhost systemd-udevd[307591]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:01:22 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:22.206 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-8a9132e3-6bf5-4fa5-8eac-9650725d34b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a9132e3-6bf5-4fa5-8eac-9650725d34b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad5cc945c4a4ceda603318537f79333', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2ba1c0c-781e-42ae-a904-9ebfc98b36b5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=52bd411f-3a65-4ceb-b07c-480829869bbb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:01:22 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:22.214 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 52bd411f-3a65-4ceb-b07c-480829869bbb in datapath 8a9132e3-6bf5-4fa5-8eac-9650725d34b1 bound to our chassis#033[00m Nov 28 
05:01:22 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:22.219 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port aebf4a7f-5482-4fc7-9998-c2f271e1ebc5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:01:22 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:22.219 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a9132e3-6bf5-4fa5-8eac-9650725d34b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:01:22 localhost ovn_controller[152726]: 2025-11-28T10:01:22Z|00047|binding|INFO|Setting lport 52bd411f-3a65-4ceb-b07c-480829869bbb ovn-installed in OVS Nov 28 05:01:22 localhost ovn_controller[152726]: 2025-11-28T10:01:22Z|00048|binding|INFO|Setting lport 52bd411f-3a65-4ceb-b07c-480829869bbb up in Southbound Nov 28 05:01:22 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:22.220 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[e898b86a-92b2-4671-a21c-4702406cfee6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:22.955 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:22Z, description=, device_id=65f73f16-3a25-49b4-8bf7-1a58246c1063, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ca7b4bb7-a367-4c61-b81b-96aa3e7c9cd9, ip_allocation=immediate, mac_address=fa:16:3e:81:be:83, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, 
description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=624, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:22Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:23 localhost podman[307657]: Nov 28 05:01:23 localhost systemd[1]: tmp-crun.0jCTre.mount: Deactivated successfully. 
Nov 28 05:01:23 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:01:23 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:23 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:23 localhost podman[307668]: 2025-11-28 10:01:23.264060394 +0000 UTC m=+0.103041730 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:01:23 localhost podman[307657]: 2025-11-28 10:01:23.171243694 +0000 UTC m=+0.043077804 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:01:23 localhost podman[307657]: 2025-11-28 10:01:23.299528156 +0000 UTC m=+0.171362186 container create f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Nov 28 05:01:23 localhost systemd[1]: Started 
libpod-conmon-f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b.scope. Nov 28 05:01:23 localhost systemd[1]: Started libcrun container. Nov 28 05:01:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93a797d5722443d4145967b946d93624c8318af2c1cabbcd44f075fa585e6ab9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:01:23 localhost podman[307657]: 2025-11-28 10:01:23.367386759 +0000 UTC m=+0.239220829 container init f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:01:23 localhost podman[307657]: 2025-11-28 10:01:23.372967058 +0000 UTC m=+0.244801118 container start f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:01:23 localhost dnsmasq[307696]: started, version 2.85 cachesize 150 Nov 28 05:01:23 localhost dnsmasq[307696]: DNS service limited to local subnets Nov 28 05:01:23 localhost dnsmasq[307696]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP 
no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:01:23 localhost dnsmasq[307696]: warning: no upstream servers configured Nov 28 05:01:23 localhost dnsmasq-dhcp[307696]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:01:23 localhost dnsmasq[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/addn_hosts - 0 addresses Nov 28 05:01:23 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/host Nov 28 05:01:23 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/opts Nov 28 05:01:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:23.529 261346 INFO neutron.agent.dhcp.agent [None req-5e966c0d-cb2e-41e4-bcbd-f77ef4de119c - - - - - -] DHCP configuration for ports {'56884b0d-9cf9-44f8-ba7a-adbb3b5ac5b6', 'ca7b4bb7-a367-4c61-b81b-96aa3e7c9cd9'} is completed#033[00m Nov 28 05:01:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v107: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 56 op/s Nov 28 05:01:24 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:01:24 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:24 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:24 localhost podman[307714]: 2025-11-28 10:01:24.191321413 +0000 UTC m=+0.069155984 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:01:25 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:01:25 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:25 localhost podman[307753]: 2025-11-28 10:01:25.477362289 +0000 UTC m=+0.065913626 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:25 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 192 MiB data, 793 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 56 op/s Nov 28 05:01:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:26.904 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:26Z, description=, device_id=65f73f16-3a25-49b4-8bf7-1a58246c1063, device_owner=network:router_interface, 
dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=22f795b4-891f-4309-a925-624689a94701, ip_allocation=immediate, mac_address=fa:16:3e:23:a9:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:01:19Z, description=, dns_domain=, id=8a9132e3-6bf5-4fa5-8eac-9650725d34b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1017617356-network, port_security_enabled=True, project_id=6ad5cc945c4a4ceda603318537f79333, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27950, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=602, status=ACTIVE, subnets=['0a395c59-e6ba-4384-8c1a-d6113e5b2e22'], tags=[], tenant_id=6ad5cc945c4a4ceda603318537f79333, updated_at=2025-11-28T10:01:20Z, vlan_transparent=None, network_id=8a9132e3-6bf5-4fa5-8eac-9650725d34b1, port_security_enabled=False, project_id=6ad5cc945c4a4ceda603318537f79333, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=637, status=DOWN, tags=[], tenant_id=6ad5cc945c4a4ceda603318537f79333, updated_at=2025-11-28T10:01:26Z on network 8a9132e3-6bf5-4fa5-8eac-9650725d34b1#033[00m Nov 28 05:01:27 localhost systemd[1]: tmp-crun.HK96Nl.mount: Deactivated successfully. 
Nov 28 05:01:27 localhost dnsmasq[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/addn_hosts - 1 addresses Nov 28 05:01:27 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/host Nov 28 05:01:27 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/opts Nov 28 05:01:27 localhost podman[307792]: 2025-11-28 10:01:27.365220965 +0000 UTC m=+0.064842043 container kill f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:01:27 localhost openstack_network_exporter[240973]: ERROR 10:01:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:01:27 localhost openstack_network_exporter[240973]: ERROR 10:01:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:01:27 localhost openstack_network_exporter[240973]: ERROR 10:01:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:01:27 localhost openstack_network_exporter[240973]: ERROR 10:01:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:01:27 localhost openstack_network_exporter[240973]: Nov 28 05:01:27 localhost openstack_network_exporter[240973]: ERROR 10:01:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:01:27 localhost openstack_network_exporter[240973]: 
Nov 28 05:01:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:27.602 261346 INFO neutron.agent.dhcp.agent [None req-6a311664-682b-4695-a087-d4a6a15e9202 - - - - - -] DHCP configuration for ports {'22f795b4-891f-4309-a925-624689a94701'} is completed#033[00m Nov 28 05:01:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v109: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 154 op/s Nov 28 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:01:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:01:28 localhost podman[239012]: time="2025-11-28T10:01:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:01:28 localhost podman[239012]: @ - - [28/Nov/2025:10:01:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159978 "" "Go-http-client/1.1" Nov 28 05:01:29 localhost podman[307816]: 2025-11-28 10:01:28.991769375 +0000 UTC m=+0.088777787 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:01:29 localhost systemd[1]: tmp-crun.QO9O6g.mount: Deactivated successfully. Nov 28 05:01:29 localhost podman[307815]: 2025-11-28 10:01:29.047498802 +0000 UTC m=+0.147534276 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:01:29 localhost podman[307814]: 2025-11-28 10:01:29.021510166 +0000 UTC m=+0.124481319 container 
health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:01:29 localhost podman[239012]: @ - - [28/Nov/2025:10:01:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20137 "" "Go-http-client/1.1" Nov 28 05:01:29 localhost podman[307814]: 2025-11-28 
10:01:29.105429675 +0000 UTC m=+0.208400818 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:01:29 localhost podman[307816]: 2025-11-28 10:01:29.118631354 +0000 UTC m=+0.215639766 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:29 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:01:29 localhost podman[307815]: 2025-11-28 10:01:29.137723532 +0000 UTC m=+0.237759026 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller) Nov 28 05:01:29 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:01:29 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:01:29 localhost podman[307822]: 2025-11-28 10:01:29.200962876 +0000 UTC m=+0.289847612 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:01:29 localhost podman[307822]: 2025-11-28 10:01:29.24043333 +0000 UTC m=+0.329318066 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:01:29 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:01:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 105 op/s Nov 28 05:01:29 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:29.652 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:26Z, description=, device_id=65f73f16-3a25-49b4-8bf7-1a58246c1063, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=22f795b4-891f-4309-a925-624689a94701, ip_allocation=immediate, mac_address=fa:16:3e:23:a9:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:01:19Z, description=, dns_domain=, id=8a9132e3-6bf5-4fa5-8eac-9650725d34b1, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-1017617356-network, port_security_enabled=True, project_id=6ad5cc945c4a4ceda603318537f79333, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27950, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=602, status=ACTIVE, subnets=['0a395c59-e6ba-4384-8c1a-d6113e5b2e22'], tags=[], tenant_id=6ad5cc945c4a4ceda603318537f79333, updated_at=2025-11-28T10:01:20Z, vlan_transparent=None, network_id=8a9132e3-6bf5-4fa5-8eac-9650725d34b1, port_security_enabled=False, project_id=6ad5cc945c4a4ceda603318537f79333, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=637, status=DOWN, tags=[], tenant_id=6ad5cc945c4a4ceda603318537f79333, updated_at=2025-11-28T10:01:26Z on network 8a9132e3-6bf5-4fa5-8eac-9650725d34b1#033[00m Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0. Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.870362) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22 Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324089870447, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1173, "num_deletes": 251, "total_data_size": 1335303, "memory_usage": 1355672, "flush_reason": "Manual Compaction"} Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324089878990, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 599688, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16275, "largest_seqno": 17443, "table_properties": {"data_size": 595874, "index_size": 1477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10741, "raw_average_key_size": 21, "raw_value_size": 587279, "raw_average_value_size": 1149, "num_data_blocks": 66, "num_entries": 511, "num_filter_entries": 
511, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324014, "oldest_key_time": 1764324014, "file_creation_time": 1764324089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}} Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 8739 microseconds, and 2993 cpu microseconds. Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.879109) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 599688 bytes OK Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.879134) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.881230) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.881254) EVENT_LOG_v1 {"time_micros": 1764324089881248, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.881276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1329577, prev total WAL file size 1329901, number of live WAL files 2. Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.882053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373630' seq:72057594037927935, type:22 .. 
'6D6772737461740034303132' seq:0, type:0; will stop at (end) Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(585KB)], [21(18MB)] Nov 28 05:01:29 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324089882160, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 20054923, "oldest_snapshot_seqno": -1} Nov 28 05:01:29 localhost dnsmasq[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/addn_hosts - 1 addresses Nov 28 05:01:29 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/host Nov 28 05:01:29 localhost podman[307918]: 2025-11-28 10:01:29.997827079 +0000 UTC m=+0.058174862 container kill f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:01:29 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/opts Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12084 keys, 18080893 bytes, temperature: kUnknown Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 
1764324090023505, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 18080893, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18012921, "index_size": 36639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 324123, "raw_average_key_size": 26, "raw_value_size": 17808263, "raw_average_value_size": 1473, "num_data_blocks": 1394, "num_entries": 12084, "num_filter_entries": 12084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324089, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}} Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.023709) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 18080893 bytes Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.026342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.8 rd, 127.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 18.6 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(63.6) write-amplify(30.2) OK, records in: 12576, records dropped: 492 output_compression: NoCompression Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.026360) EVENT_LOG_v1 {"time_micros": 1764324090026353, "job": 10, "event": "compaction_finished", "compaction_time_micros": 141392, "compaction_time_cpu_micros": 50415, "output_level": 6, "num_output_files": 1, "total_output_size": 18080893, "num_input_records": 12576, "num_output_records": 12084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324090026500, "job": 10, "event": "table_file_deletion", "file_number": 23} Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324090028223, 
"job": 10, "event": "table_file_deletion", "file_number": 21} Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:29.881919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.028255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.028261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.028264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.028267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:01:30.028270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:01:30 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:30.413 261346 INFO neutron.agent.dhcp.agent [None req-45ed9bd7-c99a-4676-a976-a30d6711f3f7 - - - - - -] DHCP configuration for ports {'22f795b4-891f-4309-a925-624689a94701'} is completed#033[00m Nov 28 05:01:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v111: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 158 op/s Nov 28 05:01:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:01:31 localhost systemd[1]: tmp-crun.D40X4C.mount: Deactivated successfully. Nov 28 05:01:31 localhost podman[307938]: 2025-11-28 10:01:31.984090744 +0000 UTC m=+0.087525500 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:01:31 localhost podman[307938]: 2025-11-28 10:01:31.999553842 +0000 UTC m=+0.102988588 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:01:32 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.038 280172 DEBUG nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Nov 28 05:01:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 151 op/s Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.732 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Creating tmpfile /var/lib/nova/instances/tmpx5ac6ig2 to notify to other compute nodes that they should mount the same storage. 
_create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.760 280172 DEBUG nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] destination check data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpx5ac6ig2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=,is_shared_block_storage=,is_shared_instance_path=,is_volume_backed=,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.783 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.783 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.792 280172 INFO nova.compute.rpcapi [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 
13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m Nov 28 05:01:33 localhost nova_compute[280168]: 2025-11-28 10:01:33.793 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.431 280172 DEBUG nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpx5ac6ig2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='c06e2ffc-a8af-41b6-ab88-680ef1f6fe50',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids=,serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.462 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquiring lock "refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 05:01:34 
localhost nova_compute[280168]: 2025-11-28 10:01:34.463 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquired lock "refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.463 280172 DEBUG nova.network.neutron [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 28 05:01:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.912 280172 DEBUG nova.network.neutron [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Updating instance_info_cache with network_info: [{"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": "ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": 
true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.930 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Releasing lock "refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.932 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=,block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpx5ac6ig2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='c06e2ffc-a8af-41b6-ab88-680ef1f6fe50',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=,src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=,target_connect_addr=,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m Nov 28 05:01:34 localhost 
nova_compute[280168]: 2025-11-28 10:01:34.933 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Creating instance directory: /var/lib/nova/instances/c06e2ffc-a8af-41b6-ab88-680ef1f6fe50 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.934 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Ensure instance console log exists: /var/lib/nova/instances/c06e2ffc-a8af-41b6-ab88-680ef1f6fe50/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.934 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Plugging VIFs using destination host port bindings before live migration. 
_pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.936 280172 DEBUG nova.virt.libvirt.vif [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-28T10:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-915340611',display_name='tempest-LiveMigrationTest-server-915340611',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005538513.localdomain',hostname='tempest-livemigrationtest-server-915340611',id=8,image_ref='85968a96-5a0e-43a4-9c04-3954f640a7ed',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-28T10:01:29Z,launched_on='np0005538513.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005538513.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='310745a04bd441169ff77f55ccf6bd7b',ramdisk_id='',reservation_id='r-1g332w05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='85968a96-5a0e-43a4-9c04-3954f640a7ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',i
mage_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-480152442',owner_user_name='tempest-LiveMigrationTest-480152442-project-member'},tags=,task_state='migrating',terminated_at=None,trusted_certs=,updated_at=2025-11-28T10:01:29Z,user_data=None,user_id='318114281cb649bc9eeed12ecdc7273f',uuid=c06e2ffc-a8af-41b6-ab88-680ef1f6fe50,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": "ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.936 280172 DEBUG nova.network.os_vif_util [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Converting VIF {"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": 
"ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.937 280172 DEBUG nova.network.os_vif_util [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.938 280172 DEBUG os_vif [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Plugging vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m Nov 28 05:01:34 localhost podman[307962]: 2025-11-28 10:01:34.970134713 +0000 UTC m=+0.077913499 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 05:01:34 localhost podman[307962]: 2025-11-28 10:01:34.984503597 +0000 UTC m=+0.092282363 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.985 280172 DEBUG ovsdbapp.backend.ovs_idl [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.986 280172 DEBUG ovsdbapp.backend.ovs_idl [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.986 280172 DEBUG ovsdbapp.backend.ovs_idl [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.987 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.987 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.987 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.988 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.990 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:34 localhost nova_compute[280168]: 2025-11-28 10:01:34.995 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:35 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.026 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.027 280172 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.027 280172 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.029 280172 INFO oslo.privsep.daemon [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpfkf_65jq/privsep.sock']#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.255 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:01:35 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully. Nov 28 05:01:35 localhost systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 13.173s CPU time. 
Nov 28 05:01:35 localhost systemd-machined[201641]: Machine qemu-1-instance-00000007 terminated. Nov 28 05:01:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v113: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 151 op/s Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.693 280172 INFO oslo.privsep.daemon [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Spawned new privsep daemon via rootwrap#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.584 307986 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.589 307986 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.593 307986 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.593 307986 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307986#033[00m Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:01:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.959 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.960 280172 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62b8533f-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.960 280172 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62b8533f-b2, col_values=(('external_ids', {'iface-id': '62b8533f-b250-4475-80c2-28c4543536b5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:68:3c', 'vm-uuid': 'c06e2ffc-a8af-41b6-ab88-680ef1f6fe50'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.966 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.968 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.970 280172 INFO os_vif [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Successfully plugged vif 
VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2')#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.971 280172 DEBUG nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m Nov 28 05:01:35 localhost nova_compute[280168]: 2025-11-28 10:01:35.972 280172 DEBUG nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpx5ac6ig2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='c06e2ffc-a8af-41b6-ab88-680ef1f6fe50',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.054 280172 INFO nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 
a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance shutdown successfully after 24 seconds.#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.066 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.067 280172 DEBUG nova.objects.instance [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.159 280172 INFO nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Beginning cold snapshot process#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.394 280172 DEBUG nova.virt.libvirt.imagebackend [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] No parent info for 85968a96-5a0e-43a4-9c04-3954f640a7ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Nov 28 05:01:36 localhost nova_compute[280168]: 2025-11-28 10:01:36.428 280172 DEBUG nova.storage.rbd_utils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] creating snapshot(ea994f4a379648c6a3c88fbbe63e049a) on rbd image(7292509e-f294-4159-96e5-22d4712df2a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Nov 28 05:01:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e98 e98: 6 total, 6 up, 6 in Nov 28 05:01:37 localhost nova_compute[280168]: 2025-11-28 10:01:37.187 280172 DEBUG nova.storage.rbd_utils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] cloning vms/7292509e-f294-4159-96e5-22d4712df2a0_disk@ea994f4a379648c6a3c88fbbe63e049a to images/c045142b-5f2b-4f4d-80b7-ca5ee791067d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Nov 28 05:01:37 localhost nova_compute[280168]: 2025-11-28 10:01:37.242 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:01:37 localhost dnsmasq[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/addn_hosts - 0 addresses Nov 28 05:01:37 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/host Nov 28 05:01:37 localhost dnsmasq-dhcp[307696]: read /var/lib/neutron/dhcp/8a9132e3-6bf5-4fa5-8eac-9650725d34b1/opts Nov 28 05:01:37 localhost podman[308074]: 2025-11-28 10:01:37.290642093 +0000 UTC m=+0.063638397 container kill f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:37 localhost nova_compute[280168]: 2025-11-28 10:01:37.358 280172 DEBUG nova.storage.rbd_utils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] flattening images/c045142b-5f2b-4f4d-80b7-ca5ee791067d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Nov 28 05:01:37 localhost nova_compute[280168]: 2025-11-28 10:01:37.466 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:37 localhost ovn_controller[152726]: 2025-11-28T10:01:37Z|00049|binding|INFO|Releasing lport 52bd411f-3a65-4ceb-b07c-480829869bbb from this chassis (sb_readonly=0) Nov 28 05:01:37 localhost kernel: device tap52bd411f-3a left promiscuous mode Nov 28 05:01:37 localhost ovn_controller[152726]: 
2025-11-28T10:01:37Z|00050|binding|INFO|Setting lport 52bd411f-3a65-4ceb-b07c-480829869bbb down in Southbound Nov 28 05:01:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:37.482 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-8a9132e3-6bf5-4fa5-8eac-9650725d34b1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a9132e3-6bf5-4fa5-8eac-9650725d34b1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad5cc945c4a4ceda603318537f79333', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e2ba1c0c-781e-42ae-a904-9ebfc98b36b5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=52bd411f-3a65-4ceb-b07c-480829869bbb) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:01:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:37.485 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 52bd411f-3a65-4ceb-b07c-480829869bbb in datapath 8a9132e3-6bf5-4fa5-8eac-9650725d34b1 unbound from our chassis#033[00m Nov 28 05:01:37 localhost nova_compute[280168]: 2025-11-28 10:01:37.489 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:37.489 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a9132e3-6bf5-4fa5-8eac-9650725d34b1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:01:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:37.491 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[03049ccf-6269-425f-bf3f-cf28893e1154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 2.3 MiB/s rd, 32 KiB/s wr, 87 op/s Nov 28 05:01:38 localhost nova_compute[280168]: 2025-11-28 10:01:38.273 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:38 localhost nova_compute[280168]: 2025-11-28 10:01:38.276 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:01:38 localhost nova_compute[280168]: 2025-11-28 10:01:38.295 280172 DEBUG nova.storage.rbd_utils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] removing snapshot(ea994f4a379648c6a3c88fbbe63e049a) on rbd image(7292509e-f294-4159-96e5-22d4712df2a0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Nov 28 05:01:38 localhost nova_compute[280168]: 2025-11-28 10:01:38.569 280172 DEBUG nova.network.neutron [None 
req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Port 62b8533f-b250-4475-80c2-28c4543536b5 updated with migration profile {'migrating_to': 'np0005538515.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m Nov 28 05:01:38 localhost nova_compute[280168]: 2025-11-28 10:01:38.572 280172 DEBUG nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=False,dst_numa_info=,dst_supports_numa_live_migration=,dst_wants_file_backed_memory=False,file_backed_memory_discard=,filename='tmpx5ac6ig2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='c06e2ffc-a8af-41b6-ab88-680ef1f6fe50',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=,src_supports_numa_live_migration=,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m Nov 28 05:01:38 localhost sshd[308154]: main: sshd: ssh-rsa algorithm is disabled Nov 28 05:01:38 localhost systemd-logind[763]: New session 73 of user nova. Nov 28 05:01:38 localhost systemd[1]: Created slice User Slice of UID 42436. Nov 28 05:01:38 localhost systemd[1]: Starting User Runtime Directory /run/user/42436... Nov 28 05:01:38 localhost systemd[1]: Finished User Runtime Directory /run/user/42436. Nov 28 05:01:38 localhost systemd[1]: Starting User Manager for UID 42436... 
Nov 28 05:01:39 localhost systemd[308158]: Queued start job for default target Main User Target. Nov 28 05:01:39 localhost systemd[308158]: Created slice User Application Slice. Nov 28 05:01:39 localhost systemd[308158]: Started Mark boot as successful after the user session has run 2 minutes. Nov 28 05:01:39 localhost systemd[308158]: Started Daily Cleanup of User's Temporary Directories. Nov 28 05:01:39 localhost systemd[308158]: Reached target Paths. Nov 28 05:01:39 localhost systemd[308158]: Reached target Timers. Nov 28 05:01:39 localhost systemd[308158]: Starting D-Bus User Message Bus Socket... Nov 28 05:01:39 localhost systemd[308158]: Starting Create User's Volatile Files and Directories... Nov 28 05:01:39 localhost systemd[308158]: Finished Create User's Volatile Files and Directories. Nov 28 05:01:39 localhost systemd[308158]: Listening on D-Bus User Message Bus Socket. Nov 28 05:01:39 localhost systemd[308158]: Reached target Sockets. Nov 28 05:01:39 localhost systemd[308158]: Reached target Basic System. Nov 28 05:01:39 localhost systemd[308158]: Reached target Main User Target. Nov 28 05:01:39 localhost systemd[308158]: Startup finished in 159ms. Nov 28 05:01:39 localhost systemd[1]: Started User Manager for UID 42436. Nov 28 05:01:39 localhost systemd[1]: Started Session 73 of User nova. 
Nov 28 05:01:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e99 e99: 6 total, 6 up, 6 in Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.244 280172 DEBUG nova.storage.rbd_utils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] creating snapshot(snap) on rbd image(c045142b-5f2b-4f4d-80b7-ca5ee791067d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Nov 28 05:01:39 localhost kernel: tun: Universal TUN/TAP device driver, 1.6 Nov 28 05:01:39 localhost kernel: device tap62b8533f-b2 entered promiscuous mode Nov 28 05:01:39 localhost NetworkManager[5965]: [1764324099.3103] manager: (tap62b8533f-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/17) Nov 28 05:01:39 localhost systemd-udevd[308210]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:01:39 localhost NetworkManager[5965]: [1764324099.3234] device (tap62b8533f-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external') Nov 28 05:01:39 localhost NetworkManager[5965]: [1764324099.3243] device (tap62b8533f-b2): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external') Nov 28 05:01:39 localhost ovn_controller[152726]: 2025-11-28T10:01:39Z|00051|binding|INFO|Claiming lport 62b8533f-b250-4475-80c2-28c4543536b5 for this additional chassis. 
Nov 28 05:01:39 localhost ovn_controller[152726]: 2025-11-28T10:01:39Z|00052|binding|INFO|62b8533f-b250-4475-80c2-28c4543536b5: Claiming fa:16:3e:58:68:3c 10.100.0.12
Nov 28 05:01:39 localhost ovn_controller[152726]: 2025-11-28T10:01:39Z|00053|binding|INFO|Claiming lport fc82099a-3702-4952-add7-ba3d39b895a0 for this additional chassis.
Nov 28 05:01:39 localhost ovn_controller[152726]: 2025-11-28T10:01:39Z|00054|binding|INFO|fc82099a-3702-4952-add7-ba3d39b895a0: Claiming fa:16:3e:41:3c:a8 19.80.0.139
Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.378 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.408 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:39 localhost systemd-machined[201641]: New machine qemu-2-instance-00000008.
Nov 28 05:01:39 localhost ovn_controller[152726]: 2025-11-28T10:01:39Z|00055|binding|INFO|Setting lport 62b8533f-b250-4475-80c2-28c4543536b5 ovn-installed in OVS
Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.417 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:39 localhost systemd[1]: Started Virtual Machine qemu-2-instance-00000008.
Nov 28 05:01:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 271 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 518 KiB/s rd, 23 KiB/s wr, 28 op/s
Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.683 280172 DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 05:01:39 localhost nova_compute[280168]: 2025-11-28 10:01:39.684 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] VM Started (Lifecycle Event)
Nov 28 05:01:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e100 e100: 6 total, 6 up, 6 in
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.320 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.341 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.341 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquired lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.341 280172 DEBUG nova.network.neutron [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.341 280172 DEBUG nova.objects.instance [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.384 280172 DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.384 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] VM Resumed (Lifecycle Event)
Nov 28 05:01:40 localhost systemd[1]: session-73.scope: Deactivated successfully.
Nov 28 05:01:40 localhost systemd-logind[763]: Session 73 logged out. Waiting for processes to exit.
Nov 28 05:01:40 localhost systemd-logind[763]: Removed session 73.
Nov 28 05:01:40 localhost nova_compute[280168]: 2025-11-28 10:01:40.963 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:41 localhost nova_compute[280168]: 2025-11-28 10:01:41.050 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:41 localhost nova_compute[280168]: 2025-11-28 10:01:41.054 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 28 05:01:41 localhost nova_compute[280168]: 2025-11-28 10:01:41.608 280172 INFO nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Snapshot image upload complete
Nov 28 05:01:41 localhost nova_compute[280168]: 2025-11-28 10:01:41.609 280172 DEBUG nova.compute.manager [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.5 MiB/s rd, 7.8 MiB/s wr, 210 op/s
Nov 28 05:01:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:01:41 localhost nova_compute[280168]: 2025-11-28 10:01:41.993 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] During the sync_power process the instance has moved from host np0005538513.localdomain to host np0005538515.localdomain
Nov 28 05:01:42 localhost nova_compute[280168]: 2025-11-28 10:01:42.938 280172 INFO nova.compute.manager [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shelve offloading
Nov 28 05:01:42 localhost nova_compute[280168]: 2025-11-28 10:01:42.945 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.
Nov 28 05:01:42 localhost nova_compute[280168]: 2025-11-28 10:01:42.945 280172 DEBUG nova.compute.manager [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:42 localhost nova_compute[280168]: 2025-11-28 10:01:42.948 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 05:01:42 localhost nova_compute[280168]: 2025-11-28 10:01:42.952 280172 DEBUG nova.network.neutron [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 05:01:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:43.078 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 05:01:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:43.079 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.109 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.247 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.464 280172 DEBUG nova.network.neutron [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.477 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Releasing lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.478 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.478 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquired lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.478 280172 DEBUG nova.network.neutron [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.479 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:01:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 157 op/s
Nov 28 05:01:43 localhost nova_compute[280168]: 2025-11-28 10:01:43.717 280172 DEBUG nova.network.neutron [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 05:01:43 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:01:43 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:01:43 localhost podman[308282]: 2025-11-28 10:01:43.917814683 +0000 UTC m=+0.044395255 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 28 05:01:43 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.061 280172 DEBUG nova.network.neutron [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.154 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Releasing lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.160 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.161 280172 DEBUG nova.objects.instance [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'resources' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.256 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.257 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.257 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.257 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.258 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.416 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00056|binding|INFO|Claiming lport 62b8533f-b250-4475-80c2-28c4543536b5 for this chassis.
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00057|binding|INFO|62b8533f-b250-4475-80c2-28c4543536b5: Claiming fa:16:3e:58:68:3c 10.100.0.12
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00058|binding|INFO|Claiming lport fc82099a-3702-4952-add7-ba3d39b895a0 for this chassis.
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00059|binding|INFO|fc82099a-3702-4952-add7-ba3d39b895a0: Claiming fa:16:3e:41:3c:a8 19.80.0.139
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00060|binding|INFO|Setting lport 62b8533f-b250-4475-80c2-28c4543536b5 up in Southbound
Nov 28 05:01:44 localhost ovn_controller[152726]: 2025-11-28T10:01:44Z|00061|binding|INFO|Setting lport fc82099a-3702-4952-add7-ba3d39b895a0 up in Southbound
Nov 28 05:01:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:44.746 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:a8 19.80.0.139'], port_security=['fa:16:3e:41:3c:a8 19.80.0.139'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['62b8533f-b250-4475-80c2-28c4543536b5'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-957922340', 'neutron:cidrs': '19.80.0.139/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-957922340', 'neutron:project_id': '310745a04bd441169ff77f55ccf6bd7b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=96ffb618-d617-4e8c-a498-acb365ae5313, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=fc82099a-3702-4952-add7-ba3d39b895a0) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 05:01:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:44.751 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:68:3c 10.100.0.12'], port_security=['fa:16:3e:58:68:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-191355626', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c06e2ffc-a8af-41b6-ab88-680ef1f6fe50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-191355626', 'neutron:project_id': '310745a04bd441169ff77f55ccf6bd7b', 'neutron:revision_number': '9', 'neutron:security_group_ids': '8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538513.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b393f93f-1891-43a2-aa26-a4cab2642f74, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=62b8533f-b250-4475-80c2-28c4543536b5) old=Port_Binding(up=[False], additional_chassis=[], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 05:01:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:44.755 158530 INFO neutron.agent.ovn.metadata.agent [-] Port fc82099a-3702-4952-add7-ba3d39b895a0 in datapath 492ef1de-4a68-49e4-b736-13cdb2eb7b59 bound to our chassis
Nov 28 05:01:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:44.762 158530 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 492ef1de-4a68-49e4-b736-13cdb2eb7b59
Nov 28 05:01:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 05:01:44 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/701571942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.806 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 05:01:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:44.816 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 req-a0dc2833-791a-4b03-82d4-55f61a32c76b 9e5033e84dec44f4956046cabe7e22af e2c76e4d27554fd5a4f85cce208b136f - - default default] This port is not SRIOV, skip binding for port 62b8533f-b250-4475-80c2-28c4543536b5.
Nov 28 05:01:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e101 e101: 6 total, 6 up, 6 in
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.898 280172 INFO nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Deleting instance files /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0_del
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.899 280172 INFO nova.virt.libvirt.driver [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Deletion of /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0_del complete
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.911 280172 DEBUG nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.912 280172 DEBUG nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.922 280172 DEBUG nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.923 280172 DEBUG nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.961 280172 DEBUG nova.virt.libvirt.host [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.962 280172 INFO nova.virt.libvirt.host [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] UEFI support detected
Nov 28 05:01:44 localhost nova_compute[280168]: 2025-11-28 10:01:44.965 280172 INFO nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Post operation of migration started
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.013 280172 INFO nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Deleted allocations for instance 7292509e-f294-4159-96e5-22d4712df2a0
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.048 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.048 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.081 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 28 05:01:45 localhost systemd[1]: tmp-crun.duZvx7.mount: Deactivated successfully.
Nov 28 05:01:45 localhost dnsmasq[307696]: exiting on receipt of SIGTERM
Nov 28 05:01:45 localhost podman[308361]: 2025-11-28 10:01:45.087594031 +0000 UTC m=+0.058622895 container kill f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 05:01:45 localhost systemd[1]: libpod-f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b.scope: Deactivated successfully.
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.104 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.132 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquiring lock "refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.133 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquired lock "refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.133 280172 DEBUG nova.network.neutron [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 28 05:01:45 localhost podman[308375]: 2025-11-28 10:01:45.135928743 +0000 UTC m=+0.033388731 container died f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:01:45 localhost ovn_controller[152726]: 2025-11-28T10:01:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:68:3c 10.100.0.12
Nov 28 05:01:45 localhost ovn_controller[152726]: 2025-11-28T10:01:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:68:3c 10.100.0.12
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.152 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.153 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11612MB free_disk=41.63758850097656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.153 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:45 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b-userdata-shm.mount: Deactivated successfully.
Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.183 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[bead1f9f-fa92-427a-9841-94cc87ada8b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.183 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap492ef1de-41 in ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.186 261619 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap492ef1de-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.186 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[cb746268-d067-46da-916a-bd1bc3d25b5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.187 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4f7d92-aa60-41d5-9041-fa494ebb4ca8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost podman[308375]: 2025-11-28 10:01:45.19035587 +0000 UTC m=+0.087815868 container remove f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8a9132e3-6bf5-4fa5-8eac-9650725d34b1, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:01:45 localhost systemd[1]: libpod-conmon-f0d54c45b79e93cce2c09d9f53a08f4697989faf756dae8aace02617f311ac7b.scope: Deactivated successfully. Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.205 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[833479b1-9d56-4db1-855b-c3d01b023dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.213 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[4ccae5a1-8683-4fdb-8080-89edd82ea369]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.214 158530 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmplomwym1i/privsep.sock']#033[00m Nov 28 05:01:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:01:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/947106529' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.544 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.550 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.638 280172 ERROR nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [req-f76c6dc6-4923-4c12-b438-1eaa6086d7c3] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource 
provider with UUID 72fba1ca-0d86-48af-8a3d-510284dfd0e0. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-f76c6dc6-4923-4c12-b438-1eaa6086d7c3"}]}#033[00m Nov 28 05:01:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 7.3 MiB/s rd, 7.2 MiB/s wr, 159 op/s Nov 28 05:01:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:45.732 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.787 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.808 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 05:01:45 
localhost nova_compute[280168]: 2025-11-28 10:01:45.809 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.821 158530 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.822 158530 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplomwym1i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.714 308430 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.722 308430 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.726 308430 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.726 308430 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308430#033[00m Nov 28 05:01:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:45.825 308430 DEBUG 
oslo.privsep.daemon [-] privsep: reply[92f8ed77-fbcb-4c69-aee5-ddc4eb0ce7c5]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.829 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.867 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTAC
H_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.936 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:45 localhost nova_compute[280168]: 2025-11-28 10:01:45.965 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:46 localhost systemd[1]: var-lib-containers-storage-overlay-93a797d5722443d4145967b946d93624c8318af2c1cabbcd44f075fa585e6ab9-merged.mount: Deactivated successfully. Nov 28 05:01:46 localhost systemd[1]: run-netns-qdhcp\x2d8a9132e3\x2d6bf5\x2d4fa5\x2d8eac\x2d9650725d34b1.mount: Deactivated successfully. 
Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.165 280172 DEBUG nova.network.neutron [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Updating instance_info_cache with network_info: [{"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": "ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 28 05:01:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:46.208 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.228 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Releasing lock 
"refresh_cache-c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.257 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.275 308430 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.275 308430 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.275 308430 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.401 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.408 280172 DEBUG 
nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.467 280172 ERROR nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [req-688da9a6-1005-4ef1-9e59-c08a93ab4f5d] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID 72fba1ca-0d86-48af-8a3d-510284dfd0e0. 
Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-688da9a6-1005-4ef1-9e59-c08a93ab4f5d"}]}#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.473 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquiring lock "7292509e-f294-4159-96e5-22d4712df2a0" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.494 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.516 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 
28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.517 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.541 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.588 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: 
COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 05:01:46 localhost nova_compute[280168]: 2025-11-28 10:01:46.668 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:46 localhost ceph-mon[301134]: 
mon.np0005538515@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.782 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[56e0eb79-8330-40b6-ab1e-77c3852f4031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost NetworkManager[5965]: [1764324106.8097] manager: (tap492ef1de-40): new Veth device (/org/freedesktop/NetworkManager/Devices/18) Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.811 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[7c920fdb-7dcf-4131-ba2e-292292f51a1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost systemd-udevd[308464]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.847 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[85ccada5-b419-4d34-958f-3736153dc10f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.852 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[9a084e47-9187-4650-8940-db00218ff1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap492ef1de-41: link becomes ready Nov 28 05:01:46 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap492ef1de-40: link becomes ready Nov 28 05:01:46 localhost NetworkManager[5965]: [1764324106.8751] device (tap492ef1de-40): carrier: link connected Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.880 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[07945255-82b2-4e53-9c83-21d52075cb8f]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.904 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[d83545aa-84d1-4a29-9265-b2d190cfb15f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap492ef1de-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e3:7c:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 
'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193102, 'reachable_time': 23541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 
'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308502, 'error': None, 'target': 'ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.928 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[035577c0-09b3-4ef4-816e-394ed29b54c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:7c76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1193102, 'tstamp': 1193102}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308503, 'error': None, 'target': 'ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.948 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccf56c1-1485-4a00-aac8-1c88da13afa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap492ef1de-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], 
['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e3:7c:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 
'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193102, 'reachable_time': 23541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 
1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308504, 'error': None, 'target': 'ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:46.984 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[6aaf7ddd-0ec7-4ef0-8abe-c5f5c2831656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.051 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[2713abf2-bf40-4190-8958-00159c98d8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.053 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap492ef1de-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.054 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.054 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap492ef1de-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.056 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:47 localhost kernel: device tap492ef1de-40 entered promiscuous mode Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.060 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.061 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap492ef1de-40, col_values=(('external_ids', {'iface-id': '6838a8cb-20d7-44c7-aad3-e7f442484bd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.062 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:47 localhost ovn_controller[152726]: 2025-11-28T10:01:47Z|00062|binding|INFO|Releasing lport 6838a8cb-20d7-44c7-aad3-e7f442484bd5 from this chassis (sb_readonly=0) Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.076 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.078 158530 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/492ef1de-4a68-49e4-b736-13cdb2eb7b59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/492ef1de-4a68-49e4-b736-13cdb2eb7b59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.079 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[bde4614e-c7d4-440a-8c95-443938892fdc]: (4, None) 
_call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.080 158530 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: global Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: log /dev/log local0 debug Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: log-tag haproxy-metadata-proxy-492ef1de-4a68-49e4-b736-13cdb2eb7b59 Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: user root Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: group root Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: maxconn 1024 Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: pidfile /var/lib/neutron/external/pids/492ef1de-4a68-49e4-b736-13cdb2eb7b59.pid.haproxy Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: daemon Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: defaults Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: log global Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: mode http Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: option httplog Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: option dontlognull Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: option http-server-close Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: option forwardfor Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: retries 3 Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: timeout http-request 30s Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: timeout connect 30s Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: timeout client 32s Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: timeout server 32s Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: timeout http-keep-alive 30s Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: Nov 28 05:01:47 
localhost ovn_metadata_agent[158525]: listen listener Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: bind 169.254.169.254:80 Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: server metadata /var/lib/neutron/metadata_proxy Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: http-request add-header X-OVN-Network-ID 492ef1de-4a68-49e4-b736-13cdb2eb7b59 Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.081 158530 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'env', 'PROCESS_TAG=haproxy-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/492ef1de-4a68-49e4-b736-13cdb2eb7b59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.211 280172 DEBUG oslo_concurrency.processutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.221 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 
'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:47.237 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:46Z, description=, device_id=df7249ed-b97e-4670-9db6-41014e05ccf7, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=db944860-27cb-4e81-92f6-1a891644e35c, ip_allocation=immediate, mac_address=fa:16:3e:76:13:cc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=723, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:47Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:47 localhost 
nova_compute[280168]: 2025-11-28 10:01:47.289 280172 DEBUG nova.scheduler.client.report [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updated inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with generation 6 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.290 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 generation from 6 to 7 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.290 280172 DEBUG nova.compute.provider_tree [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.316 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.319 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 2.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.394 280172 DEBUG oslo_concurrency.lockutils [None req-9dd5035d-70c3-4cd3-a7df-eb80858bea87 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 35.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.397 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" acquired by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: waited 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.397 280172 INFO nova.compute.manager [None 
req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Unshelving#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.411 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Migration for instance c06e2ffc-a8af-41b6-ab88-680ef1f6fe50 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.435 280172 INFO nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Updating resource usage from migration 62fb7f70-bf44-4fcf-8c08-e096ee66cd99#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.436 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Starting to track incoming migration 62fb7f70-bf44-4fcf-8c08-e096ee66cd99 with flavor 98f289d4-5c06-4ab5-9089-7b580870d676 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m Nov 28 05:01:47 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:01:47 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:47 localhost podman[308543]: 2025-11-28 10:01:47.503171877 +0000 UTC m=+0.111317709 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:01:47 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.502 280172 WARNING nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Instance c06e2ffc-a8af-41b6-ab88-680ef1f6fe50 has been moved to another host np0005538513.localdomain(np0005538513.localdomain). There are allocations remaining against the source host that might need to be removed: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}.#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.504 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance with task_state "unshelving" is not being actively managed by this compute host but has allocations referencing this compute node (72fba1ca-0d86-48af-8a3d-510284dfd0e0): {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocations during the task state transition. 
_remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1708#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.504 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.505 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.523 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:47 localhost podman[308564]: Nov 28 05:01:47 localhost nova_compute[280168]: 2025-11-28 10:01:47.569 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:47 localhost podman[308564]: 2025-11-28 10:01:47.575347782 +0000 UTC m=+0.121416616 container create 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:01:47 localhost systemd[1]: Started libpod-conmon-2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65.scope. Nov 28 05:01:47 localhost podman[308564]: 2025-11-28 10:01:47.540481906 +0000 UTC m=+0.086550790 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 28 05:01:47 localhost systemd[1]: Started libcrun container. Nov 28 05:01:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85df380a7f1d8aaeea84d1e7053bb33d88e99eb8caf1cca6548cd15c84faacda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:01:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 304 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 6.4 MiB/s rd, 9.0 MiB/s wr, 264 op/s Nov 28 05:01:47 localhost podman[308564]: 2025-11-28 10:01:47.662293483 +0000 UTC m=+0.208362357 container init 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:01:47 localhost podman[308564]: 2025-11-28 10:01:47.672095439 +0000 UTC m=+0.218164303 
container start 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 05:01:47 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [NOTICE] (308591) : New worker (308611) forked Nov 28 05:01:47 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [NOTICE] (308591) : Loading success. Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.735 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 62b8533f-b250-4475-80c2-28c4543536b5 in datapath ad2d8cf7-987d-4804-acbd-9b3e248dc8cd unbound from our chassis#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.738 158530 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ad2d8cf7-987d-4804-acbd-9b3e248dc8cd#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.747 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c449ad3c-2fec-4028-bf3e-65252d6509da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.748 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapad2d8cf7-91 in ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.750 261619 DEBUG 
neutron.privileged.agent.linux.ip_lib [-] Interface tapad2d8cf7-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.750 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa07c62-38cd-490c-a67a-9de416fcd981]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.751 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[313e942c-22f7-4a6c-a365-65ac87e0d2d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.767 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f496e9-2b44-45f4-ab92-fa6fdf5f0312]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.778 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[075220ec-5554-48ad-9989-286c411bdd87]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:47.783 261346 INFO neutron.agent.dhcp.agent [None req-30ac3a63-3ad5-4a5c-9676-d34d9c100bb7 - - - - - -] DHCP configuration for ports {'db944860-27cb-4e81-92f6-1a891644e35c'} is completed#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.799 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[adfc7d8d-494d-407c-bc5e-59134f575d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost NetworkManager[5965]: [1764324107.8076] manager: (tapad2d8cf7-90): new Veth device (/org/freedesktop/NetworkManager/Devices/19) Nov 28 05:01:47 localhost 
systemd-udevd[308481]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.806 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa119cd-a449-4c35-8a77-b949a8258341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.835 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[742fee01-a888-45b5-bc1c-ee6ab5b0f550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.838 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c2685-bf5d-492a-a91e-1033e96bdee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost NetworkManager[5965]: [1764324107.8570] device (tapad2d8cf7-90): carrier: link connected Nov 28 05:01:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapad2d8cf7-91: link becomes ready Nov 28 05:01:47 localhost kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapad2d8cf7-90: link becomes ready Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.861 308430 DEBUG oslo.privsep.daemon [-] privsep: reply[93ff9bf1-2e52-4bc9-b131-6025bfdd5676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.877 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[922fdef3-3ef0-4cb8-a878-06bfe884346c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad2d8cf7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 
65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:14:78:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 
'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193200, 'reachable_time': 18479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 
255, 'pid': 308632, 'error': None, 'target': 'ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.892 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[86705772-a996-41aa-8ce3-f934d71c541b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe14:785b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1193200, 'tstamp': 1193200}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308633, 'error': None, 'target': 'ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.908 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[911a23a8-2ef5-427f-a395-a8bdf864f4c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapad2d8cf7-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:14:78:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], 
['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193200, 'reachable_time': 18479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 
'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308634, 'error': None, 'target': 'ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.936 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[731e95e6-73a0-4a74-af6e-0dc40af5ff9a]: (4, None) _call_back 
/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.991 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[eefb38c4-f45d-43a0-b630-bac67bad1b24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.993 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad2d8cf7-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.993 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m Nov 28 05:01:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:47.994 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad2d8cf7-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:01:48 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1206230784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.030 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:48 localhost kernel: device tapad2d8cf7-90 entered promiscuous mode Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.039 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:48.039 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapad2d8cf7-90, col_values=(('external_ids', {'iface-id': 'acd4bbc3-c7c4-47d8-b58b-29abee48b714'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:48 localhost ovn_controller[152726]: 2025-11-28T10:01:48Z|00063|binding|INFO|Releasing lport acd4bbc3-c7c4-47d8-b58b-29abee48b714 from this chassis (sb_readonly=0) Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.052 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.054 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:48.055 158530 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ad2d8cf7-987d-4804-acbd-9b3e248dc8cd.pid.haproxy; Error: [Errno 2] No 
such file or directory: '/var/lib/neutron/external/pids/ad2d8cf7-987d-4804-acbd-9b3e248dc8cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:48.058 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[016208af-c7d7-4a39-8155-b9ad71b8a951]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:48.059 158530 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: global Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: log /dev/log local0 debug Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: log-tag haproxy-metadata-proxy-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: user root Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: group root Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: maxconn 1024 Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: pidfile /var/lib/neutron/external/pids/ad2d8cf7-987d-4804-acbd-9b3e248dc8cd.pid.haproxy Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: daemon Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: defaults Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: log global Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: mode http Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: option httplog Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: option dontlognull Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: option http-server-close Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: option forwardfor Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: retries 3 Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: timeout http-request 30s Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 
timeout connect 30s Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: timeout client 32s Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: timeout server 32s Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: timeout http-keep-alive 30s Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: listen listener Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: bind 169.254.169.254:80 Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: server metadata /var/lib/neutron/metadata_proxy Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: http-request add-header X-OVN-Network-ID ad2d8cf7-987d-4804-acbd-9b3e248dc8cd Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m Nov 28 05:01:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:48.061 158530 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'env', 'PROCESS_TAG=haproxy-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ad2d8cf7-987d-4804-acbd-9b3e248dc8cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.062 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.083 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 
72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.109 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.110 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.111 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 1.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.111 280172 DEBUG oslo_concurrency.lockutils [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] Lock "compute_resources" 
"released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.112 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.120 280172 INFO nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.121 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:48 localhost journal[227736]: Domain id=2 name='instance-00000008' uuid=c06e2ffc-a8af-41b6-ab88-680ef1f6fe50 is tainted: custom-monitor Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.136 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:48 localhost 
nova_compute[280168]: 2025-11-28 10:01:48.149 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.150 280172 INFO nova.compute.claims [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Claim successful on node np0005538515.localdomain#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.251 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.330 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.408 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:48 localhost podman[308667]: Nov 28 05:01:48 localhost podman[308667]: 2025-11-28 10:01:48.471885921 +0000 UTC m=+0.105184404 container create b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, org.label-schema.build-date=20251125, 
maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:01:48 localhost systemd[1]: Started libpod-conmon-b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9.scope. Nov 28 05:01:48 localhost podman[308667]: 2025-11-28 10:01:48.427947121 +0000 UTC m=+0.061245594 image pull quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified Nov 28 05:01:48 localhost systemd[1]: Started libcrun container. Nov 28 05:01:48 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a768f3cd9f334ca2043e41bc50d9940a576b305a62ce3d0d03be668021f18a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:01:48 localhost podman[308667]: 2025-11-28 10:01:48.546374075 +0000 UTC m=+0.179672488 container init b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:01:48 localhost systemd[1]: tmp-crun.j9tA6D.mount: Deactivated successfully. 
Nov 28 05:01:48 localhost podman[308667]: 2025-11-28 10:01:48.56538382 +0000 UTC m=+0.198682263 container start b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:48 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [NOTICE] (308705) : New worker (308707) forked Nov 28 05:01:48 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [NOTICE] (308705) : Loading success. Nov 28 05:01:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:01:48 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3121132784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.792 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.798 280172 DEBUG nova.compute.provider_tree [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.855 280172 DEBUG nova.scheduler.client.report [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.890 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.933 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquiring lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.933 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquired lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.934 280172 DEBUG nova.network.neutron [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 28 05:01:48 localhost nova_compute[280168]: 2025-11-28 10:01:48.967 280172 DEBUG nova.network.neutron [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.109 280172 DEBUG nova.network.neutron [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.112 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.128 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Releasing lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.130 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.131 280172 INFO nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating image(s)
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.166 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.170 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.172 280172 INFO nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.231 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.268 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.272 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquiring lock "f96cefa575d4e71026b7a689ed51daf234dda618" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.274 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "f96cefa575d4e71026b7a689ed51daf234dda618" acquired by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.327 280172 DEBUG nova.virt.libvirt.imagebackend [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Image locations are: [{'url': 'rbd://2c5417c9-00eb-57d5-a565-ddecbc7995c1/images/c045142b-5f2b-4f4d-80b7-ca5ee791067d/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://2c5417c9-00eb-57d5-a565-ddecbc7995c1/images/c045142b-5f2b-4f4d-80b7-ca5ee791067d/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.406 280172 DEBUG nova.virt.libvirt.imagebackend [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Selected location: {'url': 'rbd://2c5417c9-00eb-57d5-a565-ddecbc7995c1/images/c045142b-5f2b-4f4d-80b7-ca5ee791067d/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.407 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] cloning images/c045142b-5f2b-4f4d-80b7-ca5ee791067d@snap to None/7292509e-f294-4159-96e5-22d4712df2a0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.546 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "f96cefa575d4e71026b7a689ed51daf234dda618" "released" by "nova.virt.libvirt.imagebackend.Image.cache..fetch_func_sync" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 05:01:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 304 MiB data, 1012 MiB used, 41 GiB / 42 GiB avail; 5.4 MiB/s rd, 7.6 MiB/s wr, 224 op/s
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.706 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'migration_context' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:49 localhost nova_compute[280168]: 2025-11-28 10:01:49.775 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] flattening vms/7292509e-f294-4159-96e5-22d4712df2a0_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 28 05:01:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:01:49 localhost systemd[1]: tmp-crun.evCG8p.mount: Deactivated successfully.
Nov 28 05:01:49 localhost podman[308932]: 2025-11-28 10:01:49.995729534 +0000 UTC m=+0.099826033 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, build-date=2025-08-20T13:12:41)
Nov 28 05:01:50 localhost podman[308932]: 2025-11-28 10:01:50.010393827 +0000 UTC m=+0.114490366 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Nov 28 05:01:50 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.178 280172 INFO nova.virt.libvirt.driver [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.184 280172 DEBUG nova.compute.manager [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.206 280172 DEBUG nova.objects.instance [None req-d01cd8c7-9a11-4b90-a050-431eb8c7d5f5 13de913d83d648919edb1da3601d106c f1bee3918a2345388c202f74e60af9c5 - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.591 280172 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.592 280172 INFO nova.compute.manager [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Stopped (Lifecycle Event)
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.617 280172 DEBUG nova.compute.manager [None req-ce46f1b3-617e-478c-a97e-8ffeffe5d69b - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.738 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Image rbd:vms/7292509e-f294-4159-96e5-22d4712df2a0_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.739 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.740 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Ensure instance console log exists: /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.740 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.741 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.741 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.743 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-28T10:01:11Z,direct_url=,disk_format='raw',id=c045142b-5f2b-4f4d-80b7-ca5ee791067d,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-650509197-shelved',owner='a30386ba68ee46f4a1bac43cf415f3a4',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-11-28T10:01:41Z,virtual_size=,visibility=) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'size': 0, 'encryption_options': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '85968a96-5a0e-43a4-9c04-3954f640a7ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.749 280172 WARNING nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.751 280172 DEBUG nova.virt.libvirt.host [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Searching host: 'np0005538515.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.752 280172 DEBUG nova.virt.libvirt.host [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.753 280172 DEBUG nova.virt.libvirt.host [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Searching host: 'np0005538515.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.754 280172 DEBUG nova.virt.libvirt.host [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.755 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.756 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-28T09:59:43Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='98f289d4-5c06-4ab5-9089-7b580870d676',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-28T10:01:11Z,direct_url=,disk_format='raw',id=c045142b-5f2b-4f4d-80b7-ca5ee791067d,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-650509197-shelved',owner='a30386ba68ee46f4a1bac43cf415f3a4',properties=ImageMetaProps,protected=,size=1073741824,status='active',tags=,updated_at=2025-11-28T10:01:41Z,virtual_size=,visibility=), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.756 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.757 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.757 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.758 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.758 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.759 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.759 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.760 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.760 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.760 280172 DEBUG nova.virt.hardware [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.761 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.776 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 05:01:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:50.845 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:01:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:50.846 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:01:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:50.847 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 05:01:50 localhost systemd[1]: Stopping User Manager for UID 42436...
Nov 28 05:01:50 localhost systemd[308158]: Activating special unit Exit the Session...
Nov 28 05:01:50 localhost systemd[308158]: Stopped target Main User Target.
Nov 28 05:01:50 localhost systemd[308158]: Stopped target Basic System.
Nov 28 05:01:50 localhost systemd[308158]: Stopped target Paths.
Nov 28 05:01:50 localhost systemd[308158]: Stopped target Sockets.
Nov 28 05:01:50 localhost systemd[308158]: Stopped target Timers.
Nov 28 05:01:50 localhost systemd[308158]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 28 05:01:50 localhost systemd[308158]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 28 05:01:50 localhost systemd[308158]: Closed D-Bus User Message Bus Socket.
Nov 28 05:01:50 localhost systemd[308158]: Stopped Create User's Volatile Files and Directories.
Nov 28 05:01:50 localhost systemd[308158]: Removed slice User Application Slice.
Nov 28 05:01:50 localhost systemd[308158]: Reached target Shutdown.
Nov 28 05:01:50 localhost systemd[308158]: Finished Exit the Session.
Nov 28 05:01:50 localhost systemd[308158]: Reached target Exit the Session.
Nov 28 05:01:50 localhost systemd[1]: user@42436.service: Deactivated successfully.
Nov 28 05:01:50 localhost systemd[1]: Stopped User Manager for UID 42436.
Nov 28 05:01:50 localhost systemd[1]: Stopping User Runtime Directory /run/user/42436...
Nov 28 05:01:50 localhost systemd[1]: run-user-42436.mount: Deactivated successfully.
Nov 28 05:01:50 localhost systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Nov 28 05:01:50 localhost systemd[1]: Stopped User Runtime Directory /run/user/42436.
Nov 28 05:01:50 localhost systemd[1]: Removed slice User Slice of UID 42436.
Nov 28 05:01:50 localhost nova_compute[280168]: 2025-11-28 10:01:50.968 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:01:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 28 05:01:51 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1415036771' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.273 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.311 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.316 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 28 05:01:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v125: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 200 op/s
Nov 28 05:01:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 28 05:01:51 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2304873447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.736 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.741 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 28 05:01:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.758 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] End _get_guest_xml xml=
Nov 28 05:01:51 localhost nova_compute[280168]: [libvirt guest domain XML elided: the XML markup was stripped during log capture and cannot be reconstructed; surviving text content includes uuid 7292509e-f294-4159-96e5-22d4712df2a0, instance-00000007, 131072, 1, tempest-UnshelveToHostMultiNodesTest-server-650509197, 2025-11-28 10:01:50, 128, tempest-UnshelveToHostMultiNodesTest-426973173-project-member, tempest-UnshelveToHostMultiNodesTest-426973173, RDO, OpenStack Compute, 27.5.2-0.20250829104910.6f8decf.el9, Virtual Machine, hvm, /dev/urandom]
Nov 28 05:01:51 localhost nova_compute[280168]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.809 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] No BDM found with device name vda, not building metadata.
_build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.810 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.811 280172 INFO nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Using config drive#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.851 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.870 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.912 280172 DEBUG nova.objects.instance [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lazy-loading 'keypairs' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr 
/usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.983 280172 INFO nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Creating config drive at /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config#033[00m Nov 28 05:01:51 localhost nova_compute[280168]: 2025-11-28 10:01:51.989 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox96wfi0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.039 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Acquiring lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.040 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" acquired by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.041 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Acquiring lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.041 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.041 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.043 280172 INFO nova.compute.manager [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Terminating instance#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.045 280172 DEBUG nova.compute.manager [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 
318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.119 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox96wfi0" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:52 localhost kernel: device tap62b8533f-b2 left promiscuous mode Nov 28 05:01:52 localhost NetworkManager[5965]: [1764324112.1426] device (tap62b8533f-b2): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed') Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.189 280172 DEBUG nova.storage.rbd_utils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] rbd image 7292509e-f294-4159-96e5-22d4712df2a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00064|binding|INFO|Releasing lport 62b8533f-b250-4475-80c2-28c4543536b5 from this chassis (sb_readonly=0) Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00065|binding|INFO|Setting lport 62b8533f-b250-4475-80c2-28c4543536b5 down in Southbound Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00066|binding|INFO|Releasing lport 
fc82099a-3702-4952-add7-ba3d39b895a0 from this chassis (sb_readonly=0) Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00067|binding|INFO|Setting lport fc82099a-3702-4952-add7-ba3d39b895a0 down in Southbound Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00068|binding|INFO|Removing iface tap62b8533f-b2 ovn-installed in OVS Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00069|binding|INFO|Releasing lport 6838a8cb-20d7-44c7-aad3-e7f442484bd5 from this chassis (sb_readonly=0) Nov 28 05:01:52 localhost ovn_controller[152726]: 2025-11-28T10:01:52Z|00070|binding|INFO|Releasing lport acd4bbc3-c7c4-47d8-b58b-29abee48b714 from this chassis (sb_readonly=0) Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.205 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config 7292509e-f294-4159-96e5-22d4712df2a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.207 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:3c:a8 19.80.0.139'], port_security=['fa:16:3e:41:3c:a8 19.80.0.139'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['62b8533f-b250-4475-80c2-28c4543536b5'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-957922340', 'neutron:cidrs': '19.80.0.139/24', 'neutron:device_id': '', 'neutron:device_owner': 
'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-957922340', 'neutron:project_id': '310745a04bd441169ff77f55ccf6bd7b', 'neutron:revision_number': '5', 'neutron:security_group_ids': '8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=96ffb618-d617-4e8c-a498-acb365ae5313, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=fc82099a-3702-4952-add7-ba3d39b895a0) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.212 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:68:3c 10.100.0.12'], port_security=['fa:16:3e:58:68:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-191355626', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'c06e2ffc-a8af-41b6-ab88-680ef1f6fe50', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-191355626', 'neutron:project_id': '310745a04bd441169ff77f55ccf6bd7b', 'neutron:revision_number': '11', 'neutron:security_group_ids': '8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 
'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538513.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b393f93f-1891-43a2-aa26-a4cab2642f74, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=62b8533f-b250-4475-80c2-28c4543536b5) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.215 158530 INFO neutron.agent.ovn.metadata.agent [-] Port fc82099a-3702-4952-add7-ba3d39b895a0 in datapath 492ef1de-4a68-49e4-b736-13cdb2eb7b59 unbound from our chassis#033[00m Nov 28 05:01:52 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Deactivated successfully. Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.221 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000008.scope: Consumed 3.943s CPU time. Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.226 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 492ef1de-4a68-49e4-b736-13cdb2eb7b59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:01:52 localhost systemd-machined[201641]: Machine qemu-2-instance-00000008 terminated. 
Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.228 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[709d5b21-0381-489d-b2c4-19a970c47e4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.229 158530 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59 namespace which is not needed anymore#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.232 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.246 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost NetworkManager[5965]: [1764324112.2611] manager: (tap62b8533f-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/20) Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.282 280172 INFO nova.virt.libvirt.driver [-] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Instance destroyed successfully.#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.283 280172 DEBUG nova.objects.instance [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lazy-loading 'resources' on Instance uuid c06e2ffc-a8af-41b6-ab88-680ef1f6fe50 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.297 280172 DEBUG nova.virt.libvirt.vif [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] vif_type=ovs 
instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-11-28T10:01:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-915340611',display_name='tempest-LiveMigrationTest-server-915340611',ec2_ids=,ephemeral_gb=0,ephemeral_key_uuid=None,fault=,flavor=Flavor(5),hidden=False,host='np0005538515.localdomain',hostname='tempest-livemigrationtest-server-915340611',id=8,image_ref='85968a96-5a0e-43a4-9c04-3954f640a7ed',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=,launch_index=0,launched_at=2025-11-28T10:01:29Z,launched_on='np0005538513.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=,new_flavor=None,node='np0005538515.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=,pci_requests=,power_state=1,progress=0,project_id='310745a04bd441169ff77f55ccf6bd7b',ramdisk_id='',reservation_id='r-1g332w05',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='85968a96-5a0e-43a4-9c04-3954f640a7ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-480152442',owner_user_name='tempest-LiveMigrationTest-480152442-project-member'},tags=,task_state='deleting',terminated_at=None,trusted_certs=,updated_at=2025-11-28T10:01:50Z,user_data=None,user_id='318114281cb649bc9eeed
12ecdc7273f',uuid=c06e2ffc-a8af-41b6-ab88-680ef1f6fe50,vcpu_model=,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": "ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.298 280172 DEBUG nova.network.os_vif_util [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Converting VIF {"id": "62b8533f-b250-4475-80c2-28c4543536b5", "address": "fa:16:3e:58:68:3c", "network": {"id": "ad2d8cf7-987d-4804-acbd-9b3e248dc8cd", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1085829019-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "310745a04bd441169ff77f55ccf6bd7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62b8533f-b2", "ovs_interfaceid": "62b8533f-b250-4475-80c2-28c4543536b5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.300 280172 DEBUG nova.network.os_vif_util [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.300 280172 DEBUG os_vif [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2') unplug 
/usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.303 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.303 280172 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62b8533f-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.305 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.307 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.311 280172 INFO os_vif [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:68:3c,bridge_name='br-int',has_traffic_filtering=True,id=62b8533f-b250-4475-80c2-28c4543536b5,network=Network(ad2d8cf7-987d-4804-acbd-9b3e248dc8cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap62b8533f-b2')#033[00m Nov 28 05:01:52 localhost systemd[1]: tmp-crun.GzoSAL.mount: Deactivated successfully. 
Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [NOTICE] (308591) : haproxy version is 2.8.14-c23fe91 Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [NOTICE] (308591) : path to executable is /usr/sbin/haproxy Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [WARNING] (308591) : Exiting Master process... Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [WARNING] (308591) : Exiting Master process... Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [ALERT] (308591) : Current worker (308611) exited with code 143 (Terminated) Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59[308585]: [WARNING] (308591) : All workers exited. Exiting... (0) Nov 28 05:01:52 localhost systemd[1]: libpod-2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65.scope: Deactivated successfully. 
Nov 28 05:01:52 localhost podman[309124]: 2025-11-28 10:01:52.408794434 +0000 UTC m=+0.062557644 container died 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.415 280172 DEBUG oslo_concurrency.processutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config 7292509e-f294-4159-96e5-22d4712df2a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.415 280172 INFO nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Deleting local config drive /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0/disk.config because it was imported into RBD.#033[00m Nov 28 05:01:52 localhost podman[309124]: 2025-11-28 10:01:52.442486163 +0000 UTC m=+0.096249353 container cleanup 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, 
name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:01:52 localhost podman[309144]: 2025-11-28 10:01:52.472279345 +0000 UTC m=+0.059026857 container cleanup 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:52 localhost systemd[1]: libpod-conmon-2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65.scope: Deactivated successfully. Nov 28 05:01:52 localhost systemd-machined[201641]: New machine qemu-3-instance-00000007. Nov 28 05:01:52 localhost systemd[1]: Started Virtual Machine qemu-3-instance-00000007. 
Nov 28 05:01:52 localhost podman[309161]: 2025-11-28 10:01:52.547167721 +0000 UTC m=+0.077309310 container remove 2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.556 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d446a8-0b2b-44b4-b3ee-33e569acb7be]: (4, ('Fri Nov 28 10:01:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59 (2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65)\n2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65\nFri Nov 28 10:01:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59 (2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65)\n2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.559 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[36a2f72a-c5f8-4f2a-886c-b0ad457aa943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.560 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap492ef1de-40, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.562 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.572 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost kernel: device tap492ef1de-40 left promiscuous mode Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.575 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.580 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[eade3d9f-ed08-4f4a-b76b-9de1d857e182]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.593 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[35bed441-6c5c-4beb-a871-85ee05007d01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.594 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[fecfebec-4848-409d-b1ef-6c30351df2d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.611 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[2085ad7c-1e33-4ff4-9cb2-c4c128d41dd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 
0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 
'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193092, 'reachable_time': 18342, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 
'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309194, 'error': None, 'target': 'ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.621 158630 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-492ef1de-4a68-49e4-b736-13cdb2eb7b59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.622 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[da9db098-6708-4a0f-9d00-0149d8a190f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.624 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 62b8533f-b250-4475-80c2-28c4543536b5 in datapath ad2d8cf7-987d-4804-acbd-9b3e248dc8cd unbound from our chassis#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.627 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.628 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[420b00ed-f076-4c45-acdf-bbb9f919d125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.629 158530 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd namespace which is not needed anymore#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.811 280172 
DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.812 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Resumed (Lifecycle Event)#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.815 280172 DEBUG nova.compute.manager [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance event wait completed in 0 seconds for wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.816 280172 DEBUG nova.virt.libvirt.driver [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.820 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance spawned successfully.#033[00m Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [NOTICE] (308705) : haproxy version is 2.8.14-c23fe91 Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [NOTICE] (308705) : path to executable is /usr/sbin/haproxy Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [WARNING] (308705) : Exiting Master process... 
Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [WARNING] (308705) : Exiting Master process... Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [ALERT] (308705) : Current worker (308707) exited with code 143 (Terminated) Nov 28 05:01:52 localhost neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd[308699]: [WARNING] (308705) : All workers exited. Exiting... (0) Nov 28 05:01:52 localhost systemd[1]: libpod-b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9.scope: Deactivated successfully. Nov 28 05:01:52 localhost podman[309253]: 2025-11-28 10:01:52.833861987 +0000 UTC m=+0.083604112 container died b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.849 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.857 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 
handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 28 05:01:52 localhost podman[309253]: 2025-11-28 10:01:52.884120387 +0000 UTC m=+0.133862572 container cleanup b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.890 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] During sync_power_state the instance has a pending task (spawning). 
Skip.#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.891 280172 DEBUG nova.virt.driver [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] Emitting event Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.891 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Started (Lifecycle Event)#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.912 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.916 280172 DEBUG nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.939 280172 INFO nova.virt.libvirt.driver [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Deleting instance files /var/lib/nova/instances/c06e2ffc-a8af-41b6-ab88-680ef1f6fe50_del#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.940 280172 INFO nova.virt.libvirt.driver [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] [instance: 
c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Deletion of /var/lib/nova/instances/c06e2ffc-a8af-41b6-ab88-680ef1f6fe50_del complete#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.942 280172 INFO nova.compute.manager [None req-1a57df67-0f32-4e7a-ae01-b3a64cdd1e9a - - - - - -] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m Nov 28 05:01:52 localhost podman[309268]: 2025-11-28 10:01:52.952473315 +0000 UTC m=+0.114868686 container cleanup b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 28 05:01:52 localhost systemd[1]: libpod-conmon-b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9.scope: Deactivated successfully. 
Nov 28 05:01:52 localhost podman[309283]: 2025-11-28 10:01:52.980607247 +0000 UTC m=+0.072985630 container remove b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125) Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.984 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[5d2fd949-ad34-4724-aeab-8b08e11f9de1]: (4, ('Fri Nov 28 10:01:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd (b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9)\nb83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9\nFri Nov 28 10:01:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd (b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9)\nb83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.985 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[97f156a8-0150-4d12-9211-f547b838e017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.986 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad2d8cf7-90, bridge=None, if_exists=True) do_commit 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.987 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost kernel: device tapad2d8cf7-90 left promiscuous mode Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.989 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:52.991 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[fb48d62e-d90b-4819-88c5-ce1005deba62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:52 localhost nova_compute[280168]: 2025-11-28 10:01:52.995 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:53 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:53.009 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[41c09b17-15ac-49ff-b8cf-3190024d680c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:53 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:53.010 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[e847cb6e-d5dc-4d99-a5e9-8861d2961a46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:53 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:53.022 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[a9acf74d-b58b-4611-b574-bd51a1764b87]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 
0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 
'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1193194, 'reachable_time': 44107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 
'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309302, 'error': None, 'target': 'ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:53 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:53.024 158630 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ad2d8cf7-987d-4804-acbd-9b3e248dc8cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m Nov 28 05:01:53 localhost ovn_metadata_agent[158525]: 2025-11-28 10:01:53.024 158630 DEBUG oslo.privsep.daemon [-] privsep: reply[a4e78a62-09a3-4129-b1e0-07877d696ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:01:53 localhost nova_compute[280168]: 2025-11-28 10:01:53.033 280172 INFO nova.compute.manager [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m Nov 28 05:01:53 localhost nova_compute[280168]: 2025-11-28 10:01:53.033 280172 DEBUG oslo.service.loopingcall [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.._deallocate_network_with_retries to return. 
func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m Nov 28 05:01:53 localhost nova_compute[280168]: 2025-11-28 10:01:53.034 280172 DEBUG nova.compute.manager [-] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m Nov 28 05:01:53 localhost nova_compute[280168]: 2025-11-28 10:01:53.034 280172 DEBUG nova.network.neutron [-] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m Nov 28 05:01:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:53.062 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:52Z, description=, device_id=ff13b2c3-ffbb-486b-ba3a-fa0f2960342d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=57c70dff-855f-436b-a33c-5f3b79153011, ip_allocation=immediate, mac_address=fa:16:3e:58:c4:db, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, 
network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=756, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:52Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:53 localhost nova_compute[280168]: 2025-11-28 10:01:53.293 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:53 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:01:53 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:01:53 localhost podman[309318]: 2025-11-28 10:01:53.295435054 +0000 UTC m=+0.091599523 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:01:53 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:01:53 localhost systemd[1]: tmp-crun.6XLSz6.mount: Deactivated successfully. Nov 28 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay-0a768f3cd9f334ca2043e41bc50d9940a576b305a62ce3d0d03be668021f18a6-merged.mount: Deactivated successfully. 
Nov 28 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b83b95d2190600c1dbe7db51200edb3cc9a4c1a1dce37a308e98f94a4ead0bc9-userdata-shm.mount: Deactivated successfully. Nov 28 05:01:53 localhost systemd[1]: run-netns-ovnmeta\x2dad2d8cf7\x2d987d\x2d4804\x2dacbd\x2d9b3e248dc8cd.mount: Deactivated successfully. Nov 28 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay-85df380a7f1d8aaeea84d1e7053bb33d88e99eb8caf1cca6548cd15c84faacda-merged.mount: Deactivated successfully. Nov 28 05:01:53 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a677fb10ee4b5d09cc54f10906d47c5af3cbfa4d5a75fef5f8a83ebbe812c65-userdata-shm.mount: Deactivated successfully. Nov 28 05:01:53 localhost systemd[1]: run-netns-ovnmeta\x2d492ef1de\x2d4a68\x2d49e4\x2db736\x2d13cdb2eb7b59.mount: Deactivated successfully. Nov 28 05:01:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e102 e102: 6 total, 6 up, 6 in Nov 28 05:01:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:53.559 261346 INFO neutron.agent.dhcp.agent [None req-4bc49313-c594-4a9c-9251-e1fc93d41348 - - - - - -] DHCP configuration for ports {'57c70dff-855f-436b-a33c-5f3b79153011'} is completed#033[00m Nov 28 05:01:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.8 MiB/s rd, 8.2 MiB/s wr, 228 op/s Nov 28 05:01:54 localhost nova_compute[280168]: 2025-11-28 10:01:54.719 280172 DEBUG nova.compute.manager [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:01:54 localhost nova_compute[280168]: 2025-11-28 10:01:54.734 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:54 localhost nova_compute[280168]: 2025-11-28 10:01:54.876 280172 DEBUG oslo_concurrency.lockutils [None req-9b118e69-1b4e-4a80-93b6-9ea2d37c0d65 734ca2d323084815a7ec954d4e95f7c1 cee9624834b94e80a7bbe35b2f1a1739 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" "released" by "nova.compute.manager.ComputeManager.unshelve_instance..do_unshelve_instance" :: held 7.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.369 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 200 op/s Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.742 280172 DEBUG nova.network.neutron [-] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.758 280172 INFO nova.compute.manager [-] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Took 2.72 seconds to deallocate network for instance.#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.822 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.822 280172 DEBUG 
oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.825 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.864 280172 INFO nova.scheduler.client.report [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Deleted allocations for instance c06e2ffc-a8af-41b6-ab88-680ef1f6fe50#033[00m Nov 28 05:01:55 localhost nova_compute[280168]: 2025-11-28 10:01:55.951 280172 DEBUG oslo_concurrency.lockutils [None req-f01523fe-fe44-4637-97d1-2ed008e509d0 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Lock "c06e2ffc-a8af-41b6-ab88-680ef1f6fe50" "released" by "nova.compute.manager.ComputeManager.terminate_instance..do_terminate_instance" :: held 3.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:01:56 localhost nova_compute[280168]: 2025-11-28 10:01:56.304 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "7292509e-f294-4159-96e5-22d4712df2a0" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:01:56 localhost nova_compute[280168]: 2025-11-28 10:01:56.305 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" acquired by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:01:56 localhost nova_compute[280168]: 2025-11-28 10:01:56.305 280172 INFO nova.compute.manager [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shelving#033[00m Nov 28 05:01:56 localhost nova_compute[280168]: 2025-11-28 10:01:56.324 280172 DEBUG nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m Nov 28 05:01:56 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:56.416 2 INFO neutron.agent.securitygroups_rpc [req-49cc58ff-e4e8-45be-b0c0-595b2c881c34 req-2439bbec-210c-4eb9-989c-4cbc137e5d8d 76caaf04f9e5427ca10e0bb020dbffa2 6fec370fed684ed6ba04de00336f61ee - - default default] Security group rule updated ['4f7b9341-d4bb-4bbc-a8bf-917ce0b68881']#033[00m Nov 28 05:01:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:01:56 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:56.959 2 INFO neutron.agent.securitygroups_rpc [req-d6a9207a-32bb-417d-a1a8-33a725f0d00f 
req-76509a4e-6eff-4420-ad31-f2903ff65806 76caaf04f9e5427ca10e0bb020dbffa2 6fec370fed684ed6ba04de00336f61ee - - default default] Security group rule updated ['4f7b9341-d4bb-4bbc-a8bf-917ce0b68881']#033[00m Nov 28 05:01:57 localhost nova_compute[280168]: 2025-11-28 10:01:57.307 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:57 localhost openstack_network_exporter[240973]: ERROR 10:01:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:01:57 localhost openstack_network_exporter[240973]: ERROR 10:01:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:01:57 localhost openstack_network_exporter[240973]: ERROR 10:01:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:01:57 localhost openstack_network_exporter[240973]: ERROR 10:01:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:01:57 localhost openstack_network_exporter[240973]: Nov 28 05:01:57 localhost openstack_network_exporter[240973]: ERROR 10:01:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:01:57 localhost openstack_network_exporter[240973]: Nov 28 05:01:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 225 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 241 op/s Nov 28 05:01:57 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:57.743 2 INFO neutron.agent.securitygroups_rpc [None req-e0232781-0774-46d7-9ff8-6308f0f3831b 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Security group member updated ['8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e']#033[00m Nov 28 05:01:58 localhost nova_compute[280168]: 2025-11-28 10:01:58.296 
280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:58 localhost podman[239012]: time="2025-11-28T10:01:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:01:58 localhost podman[239012]: @ - - [28/Nov/2025:10:01:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158154 "" "Go-http-client/1.1" Nov 28 05:01:58 localhost podman[239012]: @ - - [28/Nov/2025:10:01:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19674 "" "Go-http-client/1.1" Nov 28 05:01:59 localhost snmpd[68067]: empty variable list in _query Nov 28 05:01:59 localhost snmpd[68067]: empty variable list in _query Nov 28 05:01:59 localhost snmpd[68067]: empty variable list in _query Nov 28 05:01:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 225 MiB data, 916 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 241 op/s Nov 28 05:01:59 localhost nova_compute[280168]: 2025-11-28 10:01:59.698 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:01:59 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:01:59.807 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:01:59Z, description=, device_id=8d6dcd20-92ab-47ad-ac9d-52244fd1b9b4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a5e94566-6d46-488f-ab71-30296b099db4, ip_allocation=immediate, mac_address=fa:16:3e:3c:b9:19, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=810, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:01:59Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:01:59 localhost neutron_sriov_agent[254415]: 2025-11-28 10:01:59.862 2 INFO neutron.agent.securitygroups_rpc [None req-34f90ada-ae7e-4d6e-90c9-94029146836e 318114281cb649bc9eeed12ecdc7273f 310745a04bd441169ff77f55ccf6bd7b - - default default] Security group member updated ['8cd6a72f-0cb3-42f5-95bb-7d1b962c8a1e']#033[00m Nov 28 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:01:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e103 e103: 6 total, 6 up, 6 in Nov 28 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 05:01:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:02:00 localhost podman[309345]: 2025-11-28 10:01:59.999907694 +0000 UTC m=+0.099428301 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent) Nov 28 05:02:00 localhost podman[309351]: 2025-11-28 10:02:00.029325593 +0000 UTC m=+0.113188245 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:02:00 localhost podman[309345]: 2025-11-28 10:02:00.084487503 +0000 UTC m=+0.184008140 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 28 05:02:00 localhost systemd[1]: tmp-crun.a6vCPt.mount: Deactivated successfully. Nov 28 05:02:00 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:02:00 localhost podman[309351]: 2025-11-28 10:02:00.114512742 +0000 UTC m=+0.198375434 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:02:00 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:02:00 localhost podman[309410]: 2025-11-28 10:02:00.136362803 +0000 UTC m=+0.069131993 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:02:00 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:02:00 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:00 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:00 localhost podman[309344]: 2025-11-28 10:02:00.100147757 +0000 UTC m=+0.199635333 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:02:00 localhost podman[309344]: 2025-11-28 10:02:00.180536719 +0000 UTC m=+0.280024385 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller) Nov 28 05:02:00 localhost 
systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:02:00 localhost podman[309342]: 2025-11-28 10:02:00.251420205 +0000 UTC m=+0.352712874 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible) Nov 28 05:02:00 localhost 
podman[309342]: 2025-11-28 10:02:00.260581012 +0000 UTC m=+0.361873701 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 05:02:00 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:02:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:00.334 261346 INFO neutron.agent.dhcp.agent [None req-69db6ddf-6552-497f-ab95-7f82cc1729d9 - - - - - -] DHCP configuration for ports {'a5e94566-6d46-488f-ab71-30296b099db4'} is completed#033[00m Nov 28 05:02:00 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:00.871 2 INFO neutron.agent.securitygroups_rpc [None req-42bd8e77-bdc1-4bfe-abe6-7d585fdf99bb 75ac6a26227c40ba81e61e610018d23f 1f9b84b894e641c4bee3ebcd1409ad9f - - default default] Security group rule updated ['6deb8732-9203-448a-b0a5-cf6a0375d009']#033[00m Nov 28 05:02:00 localhost nova_compute[280168]: 2025-11-28 10:02:00.984 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.002 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5d3ce7f1fffa4bc48553188980fcc5a0f98928b083e87362f9a3663b98ca8926" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.112 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 954 Content-Type: application/json Date: Fri, 28 Nov 2025 10:02:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-868e1cbc-f490-4365-854d-da0a5b96df09 x-openstack-request-id: req-868e1cbc-f490-4365-854d-da0a5b96df09 _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.113 12 
DEBUG novaclient.v2.client [-] RESP BODY: {"flavors": [{"id": "6c0b67a0-46d4-481b-87df-bc5abc74bfe1", "name": "m1.micro", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/6c0b67a0-46d4-481b-87df-bc5abc74bfe1"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/6c0b67a0-46d4-481b-87df-bc5abc74bfe1"}]}, {"id": "98f289d4-5c06-4ab5-9089-7b580870d676", "name": "m1.nano", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/98f289d4-5c06-4ab5-9089-7b580870d676"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/98f289d4-5c06-4ab5-9089-7b580870d676"}]}, {"id": "f3c44237-060e-4213-a926-aa7fdb4bf902", "name": "m1.small", "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/f3c44237-060e-4213-a926-aa7fdb4bf902"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/f3c44237-060e-4213-a926-aa7fdb4bf902"}]}]} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.113 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors?is_public=None used request id req-868e1cbc-f490-4365-854d-da0a5b96df09 request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.119 12 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET http://nova-internal.openstack.svc:8774/v2.1/flavors/98f289d4-5c06-4ab5-9089-7b580870d676 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}5d3ce7f1fffa4bc48553188980fcc5a0f98928b083e87362f9a3663b98ca8926" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:519 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:02:01.137 12 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 493 Content-Type: application/json Date: Fri, 28 Nov 2025 10:02:01 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-15cea7a1-779e-444e-86f9-112352ff567e x-openstack-request-id: req-15cea7a1-779e-444e-86f9-112352ff567e _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:550 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.137 12 DEBUG novaclient.v2.client [-] RESP BODY: {"flavor": {"id": "98f289d4-5c06-4ab5-9089-7b580870d676", "name": "m1.nano", "ram": 128, "disk": 1, "swap": "", "OS-FLV-EXT-DATA:ephemeral": 0, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "links": [{"rel": "self", "href": "http://nova-internal.openstack.svc:8774/v2.1/flavors/98f289d4-5c06-4ab5-9089-7b580870d676"}, {"rel": "bookmark", "href": "http://nova-internal.openstack.svc:8774/flavors/98f289d4-5c06-4ab5-9089-7b580870d676"}]}} _http_log_response /usr/lib/python3.9/site-packages/keystoneauth1/session.py:582 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.138 12 DEBUG novaclient.v2.client [-] GET call to compute for http://nova-internal.openstack.svc:8774/v2.1/flavors/98f289d4-5c06-4ab5-9089-7b580870d676 used request id req-15cea7a1-779e-444e-86f9-112352ff567e request /usr/lib/python3.9/site-packages/keystoneauth1/session.py:954 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.139 12 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7292509e-f294-4159-96e5-22d4712df2a0', 'name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 
0}, 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000007', 'OS-EXT-SRV-ATTR:host': 'np0005538515.localdomain', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'user_id': '28578129c91d407a92af609ba8bac430', 'hostId': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.9/site-packages/ceilometer/compute/discovery.py:228 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.142 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.148 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.148 12 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.194 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.194 12 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 7292509e-f294-4159-96e5-22d4712df2a0: ceilometer.compute.pollsters.NoVolumeException Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.194 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.210 12 DEBUG ceilometer.compute.pollsters [-] 
7292509e-f294-4159-96e5-22d4712df2a0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.211 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b366c60a-2a6b-4155-930f-8db93d806276', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.195105', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': 
'48a3ad70-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': '17a25bae4a4eb77ae60e0fd32e92f09d344e0688f3e21347717a58cea8cdf717'}, {'source': 'openstack', 'counter_name': 'disk.device.usage', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.195105', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48a3c936-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': '8474cbfdad15815a7123c1777c9ca76fb89eee1eef07e27ae105e1cb8128d181'}]}, 'timestamp': '2025-11-28 10:02:01.212638', '_unique_id': '4a61b93a5d0447ce95a7d7c78408552c'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 
localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging 
return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging 
self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.224 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.230 12 INFO ceilometer.polling.manager [-] Polling pollster 
disk.device.write.requests in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.266 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.268 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '4b06f872-6c1c-4c88-b9d6-d2a29597c59f', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.230321', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 
'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48ac5470-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'c5a5b547658c2ae100048ddcc45db4ff5d5180b52c771f10427ddedac8c2eca4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.230321', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48ac65dc-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'e7eb0a28ac27128d923a7dde225d23238500df32c5d8a9d2176f149e88fa96c9'}]}, 'timestamp': '2025-11-28 10:02:01.268534', '_unique_id': '3275019ec3bb4c94ae84f163eef73d2f'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR 
oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 
2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.270 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.273 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.273 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.273 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9acb507b-5041-4611-89ca-4ee95dda1ef4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.273459', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48ad3480-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'd87d9711d2f9992f8054f2903f7e9b004cec538b3bf0da1c5f404002128ad6c4'}, {'source': 'openstack', 'counter_name': 'disk.device.write.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.273459', 'resource_metadata': {'display_name': 
'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48ad4182-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': '3570179c06c02c299b35b55687f99c2cde83698dcfb875bb35c91732362e27ae'}]}, 'timestamp': '2025-11-28 10:02:01.274776', '_unique_id': '7fd9637f2b5741df8b4e59e596a59e41'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 
localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 
134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.277 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.278 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.278 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.279 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.write.latency volume: 0 _stats_to_sample 
/usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'a8989de5-bf71-4bef-8d52-b0ae10dd236a', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.278524', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48adfc62-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'd664f5f7f01a9f722fc902d88deb099f999ddec7130e73a3a47ab72274b33c34'}, {'source': 'openstack', 'counter_name': 'disk.device.write.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 0, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 
'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.278524', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48ae2fb6-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': '4e43cea39727cbbb60314aa0cf3a58c2ec95f40d50bf6f060876809613a2a80e'}]}, 'timestamp': '2025-11-28 10:02:01.280377', '_unique_id': '75daea94745342569e2966000947b850'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging 
self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR 
oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.281 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.282 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.282 12 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.282 12 DEBUG ceilometer.compute.pollsters [-] 
7292509e-f294-4159-96e5-22d4712df2a0/cpu volume: 7910000000 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': '7dd79bf5-fef9-4d2c-b4d2-62fc5277d34b', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'cpu', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 7910000000, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'timestamp': '2025-11-28T10:02:01.282783', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'cpu_number': 1}, 'message_id': '48ae9e56-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.415387957, 'message_signature': '2f902eda5451d0402aff92f8c2ba456a7c6d33dc9ee6f870e22596d41cefa613'}]}, 'timestamp': '2025-11-28 10:02:01.283034', '_unique_id': 'de64cac123174a58a7224fc275ad64f2'}: kombu.exceptions.OperationalError: [Errno 111] Connection 
refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR 
oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging The above exception was the 
direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging 
kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.283 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.285 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.285 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.286 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '9f81154f-2e21-4769-baf7-4ee83e453ab8', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.285462', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48af10de-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': '27fd8939504e14c6892efbf778326bb10a6f12cdf72b567f48078ec890368fc7'}, {'source': 'openstack', 'counter_name': 'disk.device.capacity', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.285462', 'resource_metadata': {'display_name': 
'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48af279a-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': 'ecea7c3e24b0552c3d53adee4d1036ad3a11ff3c1c589d0d0d76344b58963fbc'}]}, 'timestamp': '2025-11-28 10:02:01.286672', '_unique_id': 'e4f1229508704a10a50562545ab007f6'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 
localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 
134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.287 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.288 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.iops in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.288 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskIOPSPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.288 12 ERROR 
ceilometer.polling.manager [-] Prevent pollster disk.device.iops from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.289 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.289 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.289 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.290 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.290 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.290 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.291 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.291 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 
Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.291 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'e492ad16-b7c7-41a7-90be-edf45ad3d2e4', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 23775232, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.291383', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48afee32-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 
'62deb3c488182642e7b5f23a602bae8a3b941b4995a8ab9a8ffa6bd653e9e0fa'}, {'source': 'openstack', 'counter_name': 'disk.device.read.bytes', 'counter_type': 'cumulative', 'counter_unit': 'B', 'counter_volume': 2048, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.291383', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48aff814-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'b3d5e360a8662c0deb7f3f4f69f4802500219677947de31b8bf7fb482660abaf'}]}, 'timestamp': '2025-11-28 10:02:01.291881', '_unique_id': 'bbcdcebe3d8447d0b9a920c6b788286e'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 
28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.292 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.294 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 
2025-11-28 10:02:01.294 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.latency in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.294 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for PerDeviceDiskLatencyPollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.295 12 ERROR ceilometer.polling.manager [-] Prevent pollster disk.device.latency from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.295 12 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.295 12 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:163 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.295 12 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [] Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.296 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.296 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.latency volume: 950997712 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:02:01.296 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.latency volume: 928318 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'ac6e0d5b-d488-40f8-9bbb-05aa3bb95aa6', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 950997712, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.296243', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48b0abec-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': '8be86b2f202474d0857f7dcc80d2303595b836b6545eec0524ef487e640f9f82'}, {'source': 'openstack', 'counter_name': 
'disk.device.read.latency', 'counter_type': 'cumulative', 'counter_unit': 'ns', 'counter_volume': 928318, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.296243', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48b0b4d4-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'd7662cb5c233202d14f96399f46c4a9851d1036a57a73e593686a909c98e112f'}]}, 'timestamp': '2025-11-28 10:02:01.296708', '_unique_id': '9acf8d50cac343eb8ce0e2817d6bc9df'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging yield Nov 
28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR 
oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, 
in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.297 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.298 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.298 12 DEBUG ceilometer.compute.pollsters 
[-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.299 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. Payload={'message_id': 'b4b6c9ab-af0b-4e58-9463-726aa3bef9b0', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 760, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.298600', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 
'vda'}, 'message_id': '48b10ff6-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': '99c91569dde35541e782674a71448408aeef74a50b4bd10b17a452c4afe98e05'}, {'source': 'openstack', 'counter_name': 'disk.device.read.requests', 'counter_type': 'cumulative', 'counter_unit': 'request', 'counter_volume': 1, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.298600', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48b127b6-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.452005595, 'message_signature': 'e4efe1af1dd9ee6be3a8d5c6bcd4a5df44c90861c9516bdf23692148f7ad22ca'}]}, 'timestamp': '2025-11-28 10:02:01.299836', '_unique_id': '9f53cfda721c4f6ea3295554e0e22909'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 
localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call 
last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging 
return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging 
self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.300 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.301 12 INFO ceilometer.polling.manager [-] Polling pollster 
network.outgoing.packets in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.301 12 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.301 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.302 12 DEBUG ceilometer.compute.pollsters [-] 7292509e-f294-4159-96e5-22d4712df2a0/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.9/site-packages/ceilometer/compute/pollsters/__init__.py:108 Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging [-] Could not send notification to notifications. 
Payload={'message_id': '05270718-23e3-425f-80ee-8e34563de07c', 'publisher_id': 'ceilometer.polling', 'event_type': 'telemetry.polling', 'priority': 'SAMPLE', 'payload': {'samples': [{'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 1073741824, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-vda', 'timestamp': '2025-11-28T10:02:01.301819', 'resource_metadata': {'display_name': 'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'vda'}, 'message_id': '48b1858a-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': '07a5b880eadcead49e279c3062e88fdeb72e10d867ec80cdbfd124bede74bbb0'}, {'source': 'openstack', 'counter_name': 'disk.device.allocation', 'counter_type': 'gauge', 'counter_unit': 'B', 'counter_volume': 485376, 'user_id': '28578129c91d407a92af609ba8bac430', 'user_name': None, 'project_id': 'a30386ba68ee46f4a1bac43cf415f3a4', 'project_name': None, 'resource_id': '7292509e-f294-4159-96e5-22d4712df2a0-sda', 'timestamp': '2025-11-28T10:02:01.301819', 'resource_metadata': {'display_name': 
'tempest-UnshelveToHostMultiNodesTest-server-650509197', 'name': 'instance-00000007', 'instance_id': '7292509e-f294-4159-96e5-22d4712df2a0', 'instance_type': 'm1.nano', 'host': '9ebb4b702ddc99b37414d36e54c2ee005e2ad03e5449dd0a560248b5', 'instance_host': 'np0005538515.localdomain', 'flavor': {'id': '98f289d4-5c06-4ab5-9089-7b580870d676', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'status': 'active', 'state': 'running', 'task_state': '', 'image': {'id': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d'}, 'image_ref': 'c045142b-5f2b-4f4d-80b7-ca5ee791067d', 'image_ref_url': None, 'architecture': 'x86_64', 'os_type': 'hvm', 'vcpus': 1, 'memory_mb': 128, 'disk_gb': 1, 'ephemeral_gb': 0, 'root_gb': 1, 'disk_name': 'sda'}, 'message_id': '48b18efe-cc41-11f0-bb26-fa163e93ca2d', 'monotonic_time': 11945.418562253, 'message_signature': '145019e893f1a5b50f537e8781f3833ebe0797f495eacf4f7bff4a17bf98096e'}]}, 'timestamp': '2025-11-28 10:02:01.302350', '_unique_id': 'a3b2ea48a6394687b2a97232ac4f60ba'}: kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 446, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging yield Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 
localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/utils/functional.py", line 312, in retry_over_time Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return fun(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 877, in _connection_factory Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self._connection = self._establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 812, in _establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging conn = self.transport.establish_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/transport/pyamqp.py", line 201, in establish_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging conn.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/connection.py", line 323, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.transport.connect() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/amqp/transport.py", line 129, in connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self._connect(self.host, self.port, self.connect_timeout) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/amqp/transport.py", line 184, in _connect Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.sock.connect(sa) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging ConnectionRefusedError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging The above exception was the direct cause of the following exception: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging Traceback (most recent call last): Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/notify/messaging.py", line 78, in notify Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.transport._send_notification(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 
134, in _send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self._driver.send_notification(target, ctxt, message, version, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 694, in send_notification Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return self._send(target, ctxt, message, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 653, in _send Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging with self._get_connection(rpc_common.PURPOSE_SEND, retry) as conn: Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 605, in _get_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return rpc_common.ConnectionContext(self._connection_pool, Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.connection = connection_pool.get(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File 
"/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 98, in get Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return self.create(retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/pool.py", line 135, in create Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return self.connection_cls(self.conf, self.url, purpose, retry=retry) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 826, in __init__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.ensure_connection() Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 957, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.connection.ensure_connection( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 381, in ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self._ensure_connection(*args, **kwargs) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", 
line 433, in _ensure_connection Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging return retry_over_time( Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib64/python3.9/contextlib.py", line 137, in __exit__ Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging self.gen.throw(typ, value, traceback) Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging File "/usr/lib/python3.9/site-packages/kombu/connection.py", line 450, in _reraise_as_library_errors Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging raise ConnectionError(str(exc)) from exc Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging kombu.exceptions.OperationalError: [Errno 111] Connection refused Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.303 12 ERROR oslo_messaging.notify.messaging Nov 28 05:02:01 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:02:01.305 12 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters Nov 28 05:02:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:01.501 2 INFO neutron.agent.securitygroups_rpc [None req-3458faa2-903e-46ff-96c1-5776090af93b 75ac6a26227c40ba81e61e610018d23f 1f9b84b894e641c4bee3ebcd1409ad9f - - default default] Security group rule updated ['6deb8732-9203-448a-b0a5-cf6a0375d009']#033[00m Nov 28 05:02:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 181 op/s Nov 28 05:02:01 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:02 localhost nova_compute[280168]: 2025-11-28 10:02:02.310 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:02:02 localhost podman[309453]: 2025-11-28 10:02:02.992132729 +0000 UTC m=+0.094782289 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, 
config_id=edpm, container_name=node_exporter) Nov 28 05:02:03 localhost podman[309453]: 2025-11-28 10:02:03.005171614 +0000 UTC m=+0.107821154 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:02:03 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:02:03 localhost nova_compute[280168]: 2025-11-28 10:02:03.300 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v133: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 149 op/s Nov 28 05:02:04 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:02:04 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:04 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:04 localhost podman[309490]: 2025-11-28 10:02:04.60185873 +0000 UTC m=+0.074429054 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:04 localhost nova_compute[280168]: 2025-11-28 10:02:04.822 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:02:05 Nov 28 05:02:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:02:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:02:05 localhost ceph-mgr[286188]: [balancer INFO root] pools 
['manila_data', 'images', 'manila_metadata', 'vms', 'volumes', 'backups', '.mgr'] Nov 28 05:02:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:02:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 149 op/s Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006581845861250698 of space, bias 1.0, pg target 1.3163691722501396 quantized to 32 (current 32) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of 
space, bias 1.0, pg target 0.8555772569444443 quantized to 32 (current 32) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:02:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019465818676716918 quantized to 16 (current 16) Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:02:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:02:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:02:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:02:05 localhost podman[309525]: 2025-11-28 10:02:05.946488918 +0000 UTC m=+0.060162916 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:02:06 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:02:06 localhost 
dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:06 localhost podman[309538]: 2025-11-28 10:02:06.000897045 +0000 UTC m=+0.077453846 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:02:06 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:06 localhost podman[309525]: 2025-11-28 10:02:06.04152158 +0000 UTC m=+0.155195638 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:02:06 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:02:06 localhost nova_compute[280168]: 2025-11-28 10:02:06.336 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:06 localhost nova_compute[280168]: 2025-11-28 10:02:06.375 280172 DEBUG nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m Nov 28 05:02:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:07 localhost nova_compute[280168]: 2025-11-28 10:02:07.276 280172 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:02:07 localhost nova_compute[280168]: 2025-11-28 10:02:07.277 280172 INFO nova.compute.manager [-] [instance: 
c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] VM Stopped (Lifecycle Event)#033[00m Nov 28 05:02:07 localhost nova_compute[280168]: 2025-11-28 10:02:07.313 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:07 localhost nova_compute[280168]: 2025-11-28 10:02:07.427 280172 DEBUG nova.compute.manager [None req-61914361-d006-49b8-b93c-36c57176e94a - - - - - -] [instance: c06e2ffc-a8af-41b6-ab88-680ef1f6fe50] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:02:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v135: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 630 KiB/s rd, 35 KiB/s wr, 53 op/s Nov 28 05:02:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:08.262 2 INFO neutron.agent.securitygroups_rpc [req-6bffedb9-405b-4a40-9982-68d686e88a5f req-5df2fd06-5333-4972-81c1-a0ccb5870973 75ac6a26227c40ba81e61e610018d23f 1f9b84b894e641c4bee3ebcd1409ad9f - - default default] Security group member updated ['6deb8732-9203-448a-b0a5-cf6a0375d009']#033[00m Nov 28 05:02:08 localhost nova_compute[280168]: 2025-11-28 10:02:08.274 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:08 localhost nova_compute[280168]: 2025-11-28 10:02:08.303 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:08 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Deactivated successfully. Nov 28 05:02:08 localhost systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000007.scope: Consumed 13.546s CPU time. Nov 28 05:02:08 localhost systemd-machined[201641]: Machine qemu-3-instance-00000007 terminated. 
Nov 28 05:02:08 localhost dnsmasq[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/addn_hosts - 0 addresses Nov 28 05:02:08 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/host Nov 28 05:02:08 localhost dnsmasq-dhcp[307007]: read /var/lib/neutron/dhcp/4feac402-945d-4d17-a15d-c8337ea9c266/opts Nov 28 05:02:08 localhost systemd[1]: tmp-crun.SGnri4.mount: Deactivated successfully. Nov 28 05:02:08 localhost podman[309582]: 2025-11-28 10:02:08.870841764 +0000 UTC m=+0.055320037 container kill 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.077 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:09 localhost kernel: device tapd0a70cfb-41 left promiscuous mode Nov 28 05:02:09 localhost ovn_controller[152726]: 2025-11-28T10:02:09Z|00071|binding|INFO|Releasing lport d0a70cfb-41f8-4ab9-819b-560a898e8329 from this chassis (sb_readonly=0) Nov 28 05:02:09 localhost ovn_controller[152726]: 2025-11-28T10:02:09Z|00072|binding|INFO|Setting lport d0a70cfb-41f8-4ab9-819b-560a898e8329 down in Southbound Nov 28 05:02:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:09.086 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 
to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-4feac402-945d-4d17-a15d-c8337ea9c266', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4feac402-945d-4d17-a15d-c8337ea9c266', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f1bee3918a2345388c202f74e60af9c5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4a3868fc-e35e-44db-9bd3-f12a417ed185, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d0a70cfb-41f8-4ab9-819b-560a898e8329) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:09.088 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d0a70cfb-41f8-4ab9-819b-560a898e8329 in datapath 4feac402-945d-4d17-a15d-c8337ea9c266 unbound from our chassis#033[00m Nov 28 05:02:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:09.091 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4feac402-945d-4d17-a15d-c8337ea9c266, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:02:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:09.093 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[3215de84-0f82-42e1-9cc6-d53bdeb003a1]: 
(4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.098 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.389 280172 INFO nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance shutdown successfully after 13 seconds.#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.395 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.395 280172 DEBUG nova.objects.instance [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'numa_topology' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.453 280172 INFO nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Beginning cold snapshot process#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.595 280172 DEBUG nova.virt.libvirt.imagebackend [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] No parent info for 85968a96-5a0e-43a4-9c04-3954f640a7ed; asking the Image API where its store is _get_parent_pool 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.644 280172 DEBUG nova.storage.rbd_utils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] creating snapshot(450118c61e6a4bb095de1d74dd4c0177) on rbd image(7292509e-f294-4159-96e5-22d4712df2a0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Nov 28 05:02:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 630 KiB/s rd, 35 KiB/s wr, 53 op/s Nov 28 05:02:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e104 e104: 6 total, 6 up, 6 in Nov 28 05:02:09 localhost nova_compute[280168]: 2025-11-28 10:02:09.893 280172 DEBUG nova.storage.rbd_utils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] cloning vms/7292509e-f294-4159-96e5-22d4712df2a0_disk@450118c61e6a4bb095de1d74dd4c0177 to images/a2def208-be38-4da4-a3f2-d5c5045455ca clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m Nov 28 05:02:10 localhost nova_compute[280168]: 2025-11-28 10:02:10.084 280172 DEBUG nova.storage.rbd_utils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] flattening images/a2def208-be38-4da4-a3f2-d5c5045455ca flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m Nov 28 05:02:10 localhost nova_compute[280168]: 2025-11-28 10:02:10.720 280172 DEBUG nova.storage.rbd_utils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] removing snapshot(450118c61e6a4bb095de1d74dd4c0177) on rbd image(7292509e-f294-4159-96e5-22d4712df2a0_disk) 
remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m Nov 28 05:02:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e105 e105: 6 total, 6 up, 6 in Nov 28 05:02:10 localhost nova_compute[280168]: 2025-11-28 10:02:10.956 280172 DEBUG nova.storage.rbd_utils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] creating snapshot(snap) on rbd image(a2def208-be38-4da4-a3f2-d5c5045455ca) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m Nov 28 05:02:11 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:02:11 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:11 localhost podman[309765]: 2025-11-28 10:02:11.570976688 +0000 UTC m=+0.060876406 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:02:11 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 6.7 MiB/s rd, 8.5 MiB/s wr, 217 op/s Nov 28 05:02:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 
322961408 Nov 28 05:02:11 localhost nova_compute[280168]: 2025-11-28 10:02:11.879 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e106 e106: 6 total, 6 up, 6 in Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.315 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.688 280172 INFO nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Snapshot image upload complete#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.689 280172 DEBUG nova.compute.manager [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:02:12 localhost dnsmasq[307007]: exiting on receipt of SIGTERM Nov 28 05:02:12 localhost systemd[1]: libpod-421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a.scope: Deactivated successfully. 
Nov 28 05:02:12 localhost podman[309802]: 2025-11-28 10:02:12.718867249 +0000 UTC m=+0.060131994 container kill 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125) Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.744 280172 INFO nova.compute.manager [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Shelve offloading#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.751 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.752 280172 DEBUG nova.compute.manager [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.754 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m Nov 28 05:02:12 localhost 
nova_compute[280168]: 2025-11-28 10:02:12.754 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquired lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.754 280172 DEBUG nova.network.neutron [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m Nov 28 05:02:12 localhost podman[309814]: 2025-11-28 10:02:12.791713041 +0000 UTC m=+0.062420044 container died 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.811 280172 DEBUG nova.network.neutron [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance cache missing network info. 
_get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m Nov 28 05:02:12 localhost podman[309814]: 2025-11-28 10:02:12.832876153 +0000 UTC m=+0.103583116 container cleanup 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:12 localhost systemd[1]: libpod-conmon-421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a.scope: Deactivated successfully. Nov 28 05:02:12 localhost podman[309821]: 2025-11-28 10:02:12.871411614 +0000 UTC m=+0.131644495 container remove 421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4feac402-945d-4d17-a15d-c8337ea9c266, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:02:12 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:12.903 261346 INFO neutron.agent.dhcp.agent [None req-fa2996bb-4129-4fad-bd6f-99d7b74572f3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.981 280172 DEBUG nova.network.neutron [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 
28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m Nov 28 05:02:12 localhost nova_compute[280168]: 2025-11-28 10:02:12.998 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Releasing lock "refresh_cache-7292509e-f294-4159-96e5-22d4712df2a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.007 280172 INFO nova.virt.libvirt.driver [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Instance destroyed successfully.#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.008 280172 DEBUG nova.objects.instance [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lazy-loading 'resources' on Instance uuid 7292509e-f294-4159-96e5-22d4712df2a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m Nov 28 05:02:13 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:13.234 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.308 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:02:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3925667673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:02:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:02:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3925667673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:02:13 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:02:13 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:13 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:13 localhost podman[309879]: 2025-11-28 10:02:13.530294528 +0000 UTC m=+0.073594787 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.583 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.646 280172 INFO nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 
7292509e-f294-4159-96e5-22d4712df2a0] Deleting instance files /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0_del#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.647 280172 INFO nova.virt.libvirt.driver [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] Deletion of /var/lib/nova/instances/7292509e-f294-4159-96e5-22d4712df2a0_del complete#033[00m Nov 28 05:02:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 200 op/s Nov 28 05:02:13 localhost systemd[1]: var-lib-containers-storage-overlay-a53b8d3f668b25246333f5de4a531a6e13da55d713122016f2fd29b9d52ffaf2-merged.mount: Deactivated successfully. Nov 28 05:02:13 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-421f676002d9dc6198ae0142c65c01f174b39b07450c22c4b59e8e8bd991f65a-userdata-shm.mount: Deactivated successfully. Nov 28 05:02:13 localhost systemd[1]: run-netns-qdhcp\x2d4feac402\x2d945d\x2d4d17\x2da15d\x2dc8337ea9c266.mount: Deactivated successfully. 
Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.742 280172 INFO nova.scheduler.client.report [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Deleted allocations for instance 7292509e-f294-4159-96e5-22d4712df2a0#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.788 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.789 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:02:13 localhost nova_compute[280168]: 2025-11-28 10:02:13.814 280172 DEBUG oslo_concurrency.processutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:02:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:02:14 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/915675261' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:02:14 localhost nova_compute[280168]: 2025-11-28 10:02:14.273 280172 DEBUG oslo_concurrency.processutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:02:14 localhost nova_compute[280168]: 2025-11-28 10:02:14.280 280172 DEBUG nova.compute.provider_tree [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:02:14 localhost nova_compute[280168]: 2025-11-28 10:02:14.296 280172 DEBUG nova.scheduler.client.report [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:02:14 localhost nova_compute[280168]: 2025-11-28 10:02:14.318 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default 
default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:02:14 localhost nova_compute[280168]: 2025-11-28 10:02:14.389 280172 DEBUG oslo_concurrency.lockutils [None req-e7d006c1-4e1e-4995-bd1a-1f6d1f26ac1e 28578129c91d407a92af609ba8bac430 a30386ba68ee46f4a1bac43cf415f3a4 - - default default] Lock "7292509e-f294-4159-96e5-22d4712df2a0" "released" by "nova.compute.manager.ComputeManager.shelve_instance..do_shelve_instance" :: held 18.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:02:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e107 e107: 6 total, 6 up, 6 in Nov 28 05:02:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 8.0 MiB/s rd, 12 MiB/s wr, 204 op/s Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0. 
Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.032192) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136032250, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 964, "num_deletes": 254, "total_data_size": 1034210, "memory_usage": 1051536, "flush_reason": "Manual Compaction"} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136040182, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 674308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17448, "largest_seqno": 18407, "table_properties": {"data_size": 670184, "index_size": 1787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10053, "raw_average_key_size": 20, "raw_value_size": 661649, "raw_average_value_size": 1358, "num_data_blocks": 78, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324089, "oldest_key_time": 1764324089, "file_creation_time": 1764324136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 8037 microseconds, and 2808 cpu microseconds. Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.040228) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 674308 bytes OK Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.040251) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.042372) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.042397) EVENT_LOG_v1 {"time_micros": 1764324136042391, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.042417) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1029284, prev total WAL file 
size 1029284, number of live WAL files 2. Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.043003) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end) Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(658KB)], [24(17MB)] Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136043098, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 18755201, "oldest_snapshot_seqno": -1} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12046 keys, 16120641 bytes, temperature: kUnknown Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136158331, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16120641, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16054296, "index_size": 35140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 323941, "raw_average_key_size": 26, "raw_value_size": 
15851460, "raw_average_value_size": 1315, "num_data_blocks": 1327, "num_entries": 12046, "num_filter_entries": 12046, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.158558) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16120641 bytes Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.160578) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.7 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 17.2 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(51.7) write-amplify(23.9) OK, records in: 12571, records dropped: 525 output_compression: NoCompression Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.160596) EVENT_LOG_v1 {"time_micros": 1764324136160588, "job": 12, "event": "compaction_finished", "compaction_time_micros": 115292, "compaction_time_cpu_micros": 47119, "output_level": 6, "num_output_files": 1, "total_output_size": 16120641, "num_input_records": 12571, "num_output_records": 12046, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136160790, "job": 12, "event": "table_file_deletion", "file_number": 26} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324136162486, 
"job": 12, "event": "table_file_deletion", "file_number": 24} Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.042880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.162603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.162612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.162615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.162617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:02:16.162620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:02:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:17 localhost nova_compute[280168]: 2025-11-28 10:02:17.317 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:02:17 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:02:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:02:17 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:02:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:02:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 273 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 10 MiB/s rd, 10 MiB/s wr, 387 op/s Nov 28 05:02:17 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 25632c53-ed7a-489d-a0d0-638d4bdafaff (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:02:17 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 25632c53-ed7a-489d-a0d0-638d4bdafaff (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:02:17 localhost ceph-mgr[286188]: [progress INFO root] Completed event 25632c53-ed7a-489d-a0d0-638d4bdafaff (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:02:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:02:17 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:02:18 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:02:18 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:02:18 localhost nova_compute[280168]: 2025-11-28 10:02:18.311 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e108 e108: 6 total, 6 up, 6 in Nov 28 05:02:19 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:19.518 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:02:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v146: 177 pgs: 177 active+clean; 273 MiB data, 941 MiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 25 KiB/s wr, 182 op/s Nov 28 05:02:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e109 e109: 6 total, 6 up, 6 in Nov 28 05:02:20 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:02:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:02:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:02:21 localhost podman[310009]: 2025-11-28 10:02:21.065940832 +0000 UTC m=+0.099528442 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, version=9.6, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc.) 
Nov 28 05:02:21 localhost podman[310009]: 2025-11-28 10:02:21.076139034 +0000 UTC m=+0.109726564 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public) Nov 28 05:02:21 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:02:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e110 e110: 6 total, 6 up, 6 in Nov 28 05:02:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:02:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 273 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 13 MiB/s rd, 7.8 MiB/s wr, 572 op/s Nov 28 05:02:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e111 e111: 6 total, 6 up, 6 in Nov 28 05:02:22 localhost nova_compute[280168]: 2025-11-28 10:02:22.320 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e112 e112: 6 total, 6 up, 6 in Nov 28 05:02:23 localhost nova_compute[280168]: 2025-11-28 10:02:23.350 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 273 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 13 MiB/s rd, 12 MiB/s wr, 504 op/s Nov 28 05:02:23 localhost nova_compute[280168]: 2025-11-28 10:02:23.946 280172 DEBUG nova.virt.driver [-] Emitting event Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m Nov 28 05:02:23 localhost nova_compute[280168]: 2025-11-28 10:02:23.946 280172 INFO nova.compute.manager [-] [instance: 7292509e-f294-4159-96e5-22d4712df2a0] VM Stopped (Lifecycle Event)#033[00m Nov 28 05:02:24 localhost nova_compute[280168]: 2025-11-28 10:02:24.213 280172 DEBUG nova.compute.manager [None req-774cb919-d254-431b-b0b7-1e90bb499929 - - - - - -] [instance: 
7292509e-f294-4159-96e5-22d4712df2a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m Nov 28 05:02:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e113 e113: 6 total, 6 up, 6 in Nov 28 05:02:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e114 e114: 6 total, 6 up, 6 in Nov 28 05:02:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 273 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail Nov 28 05:02:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:25.940 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:25Z, description=, device_id=856a9e5d-377c-485a-8ffd-a38bf58c9fa5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=078d853f-feea-4033-aef7-9f3673e9288f, ip_allocation=immediate, mac_address=fa:16:3e:a7:b4:2f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, 
qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=975, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:25Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:26 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:02:26 localhost podman[310049]: 2025-11-28 10:02:26.164831024 +0000 UTC m=+0.052569012 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:02:26 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:26 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e115 e115: 6 total, 6 up, 6 in Nov 28 05:02:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:26.469 261346 INFO neutron.agent.dhcp.agent [None req-57d1c120-e546-45fb-a4f3-e968f2b5a166 - - - - - -] DHCP configuration for ports {'078d853f-feea-4033-aef7-9f3673e9288f'} is completed#033[00m Nov 28 05:02:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:27 localhost nova_compute[280168]: 2025-11-28 10:02:27.058 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e116 e116: 6 total, 6 up, 6 in Nov 28 05:02:27 localhost nova_compute[280168]: 2025-11-28 10:02:27.322 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:27 localhost openstack_network_exporter[240973]: ERROR 10:02:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:02:27 localhost openstack_network_exporter[240973]: ERROR 10:02:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:02:27 localhost openstack_network_exporter[240973]: ERROR 10:02:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:02:27 localhost openstack_network_exporter[240973]: ERROR 10:02:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:02:27 localhost openstack_network_exporter[240973]: Nov 28 05:02:27 localhost openstack_network_exporter[240973]: ERROR 10:02:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:02:27 localhost openstack_network_exporter[240973]: Nov 28 05:02:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 225 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 555 op/s Nov 28 05:02:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e117 e117: 6 total, 6 up, 6 in Nov 28 05:02:28 localhost nova_compute[280168]: 2025-11-28 10:02:28.354 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:28 localhost podman[239012]: time="2025-11-28T10:02:28Z" level=info msg="List containers: received `last` parameter - 
overwriting `limit`" Nov 28 05:02:28 localhost podman[239012]: @ - - [28/Nov/2025:10:02:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:02:28 localhost podman[239012]: @ - - [28/Nov/2025:10:02:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19205 "" "Go-http-client/1.1" Nov 28 05:02:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e118 e118: 6 total, 6 up, 6 in Nov 28 05:02:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 225 MiB data, 886 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 555 op/s Nov 28 05:02:29 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:29.868 2 INFO neutron.agent.securitygroups_rpc [None req-163713b6-af4d-4d16-9097-b3cd54a25f68 078cec78b66d44acb2dcf304e572f2cf dba68040958c4e4c89f84cd27a771cd2 - - default default] Security group member updated ['a0bf5ab5-c355-48ac-a40e-9473d4858766']#033[00m Nov 28 05:02:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e119 e119: 6 total, 6 up, 6 in Nov 28 05:02:30 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:30.388 2 INFO neutron.agent.securitygroups_rpc [None req-59eaff10-1680-4aeb-97dc-49cab4063acc 078cec78b66d44acb2dcf304e572f2cf dba68040958c4e4c89f84cd27a771cd2 - - default default] Security group member updated ['a0bf5ab5-c355-48ac-a40e-9473d4858766']#033[00m Nov 28 05:02:30 localhost nova_compute[280168]: 2025-11-28 10:02:30.422 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:02:30 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:02:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e120 e120: 6 total, 6 up, 6 in Nov 28 05:02:30 localhost podman[310071]: 2025-11-28 10:02:30.997331851 +0000 UTC m=+0.091681761 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent) Nov 28 05:02:31 localhost podman[310071]: 2025-11-28 10:02:31.006741819 +0000 UTC m=+0.101091669 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:31 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:02:31 localhost podman[310072]: 2025-11-28 10:02:31.007919285 +0000 UTC m=+0.096366505 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:02:31 localhost podman[310072]: 2025-11-28 10:02:31.087937467 +0000 UTC m=+0.176384667 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 
'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:02:31 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:02:31 localhost podman[310069]: 2025-11-28 10:02:31.150923998 +0000 UTC m=+0.250927092 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 05:02:31 localhost podman[310070]: 2025-11-28 10:02:31.203750817 +0000 UTC m=+0.302372398 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible) Nov 28 05:02:31 localhost podman[310069]: 2025-11-28 10:02:31.216415405 +0000 UTC m=+0.316418519 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 05:02:31 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:02:31 localhost podman[310070]: 2025-11-28 10:02:31.277690973 +0000 UTC m=+0.376312514 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:02:31 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:02:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 225 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 203 KiB/s rd, 51 KiB/s wr, 281 op/s Nov 28 05:02:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:32 localhost nova_compute[280168]: 2025-11-28 10:02:32.324 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e121 e121: 6 total, 6 up, 6 in Nov 28 05:02:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:33.076 261346 INFO neutron.agent.linux.ip_lib [None req-94478697-69e1-416a-905a-44405c2bc0e6 - - - - - -] Device tap51f612f0-6f cannot be used as it has no MAC address#033[00m Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.097 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost kernel: device tap51f612f0-6f entered promiscuous mode Nov 28 05:02:33 localhost NetworkManager[5965]: [1764324153.1064] manager: (tap51f612f0-6f): new Generic device (/org/freedesktop/NetworkManager/Devices/21) Nov 28 05:02:33 localhost ovn_controller[152726]: 2025-11-28T10:02:33Z|00073|binding|INFO|Claiming lport 51f612f0-6f47-40c4-b14b-9819c35d81b4 for this chassis. 
Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.107 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost ovn_controller[152726]: 2025-11-28T10:02:33Z|00074|binding|INFO|51f612f0-6f47-40c4-b14b-9819c35d81b4: Claiming unknown Nov 28 05:02:33 localhost systemd-udevd[310160]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:02:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:33.123 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-02080985-b864-4a8c-99f6-15cd1e3b9bee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02080985-b864-4a8c-99f6-15cd1e3b9bee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7913f694a0c456794b7dd9ed628cb12', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1193d1c9-a667-4e4b-958b-68ef2ed8f8fd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51f612f0-6f47-40c4-b14b-9819c35d81b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:33.125 158530 INFO 
neutron.agent.ovn.metadata.agent [-] Port 51f612f0-6f47-40c4-b14b-9819c35d81b4 in datapath 02080985-b864-4a8c-99f6-15cd1e3b9bee bound to our chassis#033[00m Nov 28 05:02:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:33.127 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 02080985-b864-4a8c-99f6-15cd1e3b9bee or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:02:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:33.128 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[7b473e3d-acb2-4ccb-a3cf-1aefab7c7810]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.141 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost ovn_controller[152726]: 2025-11-28T10:02:33Z|00075|binding|INFO|Setting lport 51f612f0-6f47-40c4-b14b-9819c35d81b4 ovn-installed in OVS Nov 28 05:02:33 localhost ovn_controller[152726]: 2025-11-28T10:02:33Z|00076|binding|INFO|Setting lport 51f612f0-6f47-40c4-b14b-9819c35d81b4 up in Southbound Nov 28 05:02:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.147 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost journal[228057]: ethtool ioctl error on tap51f612f0-6f: No such device Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.213 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.233 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost systemd[1]: tmp-crun.NAxtzl.mount: Deactivated successfully. 
Nov 28 05:02:33 localhost podman[310168]: 2025-11-28 10:02:33.264535596 +0000 UTC m=+0.110180617 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:02:33 localhost podman[310168]: 2025-11-28 10:02:33.274533343 +0000 UTC m=+0.120178344 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 
'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:02:33 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:02:33 localhost nova_compute[280168]: 2025-11-28 10:02:33.356 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 225 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 188 KiB/s rd, 48 KiB/s wr, 260 op/s Nov 28 05:02:33 localhost podman[310254]: Nov 28 05:02:33 localhost podman[310254]: 2025-11-28 10:02:33.988126304 +0000 UTC m=+0.082279183 container create 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:02:34 localhost systemd[1]: Started libpod-conmon-2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f.scope. Nov 28 05:02:34 localhost podman[310254]: 2025-11-28 10:02:33.949968814 +0000 UTC m=+0.044121713 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:02:34 localhost systemd[1]: Started libcrun container. 
Nov 28 05:02:34 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cb36eecf92e193033a7d68ca6fa47ce64881bf085c7d0c1c8fd7557fceaaf3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:02:34 localhost podman[310254]: 2025-11-28 10:02:34.069391225 +0000 UTC m=+0.163544094 container init 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2) Nov 28 05:02:34 localhost systemd[1]: tmp-crun.HPbvqN.mount: Deactivated successfully. Nov 28 05:02:34 localhost podman[310254]: 2025-11-28 10:02:34.083363763 +0000 UTC m=+0.177516632 container start 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Nov 28 05:02:34 localhost dnsmasq[310272]: started, version 2.85 cachesize 150 Nov 28 05:02:34 localhost dnsmasq[310272]: DNS service limited to local subnets Nov 28 05:02:34 localhost dnsmasq[310272]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify 
dumpfile Nov 28 05:02:34 localhost dnsmasq[310272]: warning: no upstream servers configured Nov 28 05:02:34 localhost dnsmasq-dhcp[310272]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:02:34 localhost dnsmasq[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/addn_hosts - 0 addresses Nov 28 05:02:34 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/host Nov 28 05:02:34 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/opts Nov 28 05:02:34 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:34.273 261346 INFO neutron.agent.dhcp.agent [None req-7a7c43c6-3509-47dd-8b59-79886fef072a - - - - - -] DHCP configuration for ports {'b8b73ac7-0389-4b76-8dd8-3615c03348fe'} is completed#033[00m Nov 28 05:02:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e122 e122: 6 total, 6 up, 6 in Nov 28 05:02:34 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:34.966 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:34Z, description=, device_id=b5ba2814-344b-427b-b30c-b10dca1fc3b1, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=962ab66e-6f28-460c-ba48-6f8d97c72fc1, ip_allocation=immediate, mac_address=fa:16:3e:be:72:98, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, 
provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1059, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:34Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:35 localhost podman[310291]: 2025-11-28 10:02:35.203843442 +0000 UTC m=+0.067615393 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2) Nov 28 05:02:35 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:35.378 261346 INFO neutron.agent.dhcp.agent [None req-14d8cd81-cc41-491e-96af-db4c8f5fc3c7 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:34Z, description=, device_id=bc18d52b-79ab-4649-9d2c-e95822b972e6, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4943dbb9-4fdb-4880-be61-1585f95a0a04, ip_allocation=immediate, mac_address=fa:16:3e:a9:f5:f2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1060, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:34Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:35.439 261346 INFO neutron.agent.dhcp.agent [None req-958bedac-91a2-41d2-a0d3-50c5cc8addae - - - - - -] DHCP configuration for ports {'962ab66e-6f28-460c-ba48-6f8d97c72fc1'} is completed#033[00m Nov 28 05:02:35 localhost nova_compute[280168]: 2025-11-28 10:02:35.551 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:35 localhost podman[310329]: 2025-11-28 10:02:35.616189611 +0000 
UTC m=+0.062316632 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:02:35 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v168: 177 pgs: 177 active+clean; 225 MiB data, 891 MiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 36 KiB/s wr, 195 op/s Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:02:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:02:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:35.915 261346 INFO neutron.agent.dhcp.agent [None req-c09ef46b-67f3-466a-a234-9e4f86de8f36 - - - - - -] DHCP configuration for ports {'4943dbb9-4fdb-4880-be61-1585f95a0a04'} is completed#033[00m Nov 28 05:02:35 localhost podman[310367]: 2025-11-28 10:02:35.997430835 +0000 UTC m=+0.060494406 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:02:35 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:35 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:36 localhost nova_compute[280168]: 2025-11-28 10:02:36.360 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:02:36 localhost podman[310388]: 2025-11-28 10:02:36.968255668 +0000 UTC m=+0.072971197 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:02:36 localhost podman[310388]: 2025-11-28 10:02:36.984325871 +0000 UTC m=+0.089041320 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:02:36 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:02:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:37.051 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:36Z, description=, device_id=bc18d52b-79ab-4649-9d2c-e95822b972e6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a5d73d1f-506a-456b-84b3-47f3eca586f5, ip_allocation=immediate, mac_address=fa:16:3e:c0:bf:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:02:31Z, description=, dns_domain=, id=02080985-b864-4a8c-99f6-15cd1e3b9bee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-806689888-network, port_security_enabled=True, project_id=a7913f694a0c456794b7dd9ed628cb12, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58375, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1036, status=ACTIVE, subnets=['a7068720-9432-437a-b41d-71d3341bbf2b'], tags=[], tenant_id=a7913f694a0c456794b7dd9ed628cb12, updated_at=2025-11-28T10:02:32Z, vlan_transparent=None, network_id=02080985-b864-4a8c-99f6-15cd1e3b9bee, port_security_enabled=False, project_id=a7913f694a0c456794b7dd9ed628cb12, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1064, status=DOWN, tags=[], tenant_id=a7913f694a0c456794b7dd9ed628cb12, updated_at=2025-11-28T10:02:36Z on network 02080985-b864-4a8c-99f6-15cd1e3b9bee#033[00m Nov 28 05:02:37 localhost nova_compute[280168]: 2025-11-28 10:02:37.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task 
ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:37.265 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:36Z, description=, device_id=bebca4d6-d3c9-48a5-be17-ed38f336aa97, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c72e44f2-e17b-4cb9-b759-8ca328e1cca9, ip_allocation=immediate, mac_address=fa:16:3e:55:6d:55, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1065, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:36Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:37 localhost dnsmasq[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/addn_hosts - 1 addresses Nov 28 05:02:37 localhost 
dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/host Nov 28 05:02:37 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/opts Nov 28 05:02:37 localhost podman[310422]: 2025-11-28 10:02:37.279644351 +0000 UTC m=+0.059207065 container kill 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 05:02:37 localhost systemd[1]: tmp-crun.JNnQIF.mount: Deactivated successfully. Nov 28 05:02:37 localhost nova_compute[280168]: 2025-11-28 10:02:37.327 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:37.475 261346 INFO neutron.agent.dhcp.agent [None req-04bbd525-e3f1-4019-9d68-71e5b77be9e2 - - - - - -] DHCP configuration for ports {'a5d73d1f-506a-456b-84b3-47f3eca586f5'} is completed#033[00m Nov 28 05:02:37 localhost podman[310460]: 2025-11-28 10:02:37.524440225 +0000 UTC m=+0.063584510 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 05:02:37 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:02:37 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:37 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 138 KiB/s rd, 32 KiB/s wr, 191 op/s Nov 28 05:02:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:37.719 261346 INFO neutron.agent.dhcp.agent [None req-f526a273-7a95-43fd-8624-cfbb0f92b601 - - - - - -] DHCP configuration for ports {'c72e44f2-e17b-4cb9-b759-8ca328e1cca9'} is completed#033[00m Nov 28 05:02:38 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:38.232 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:36Z, description=, device_id=bc18d52b-79ab-4649-9d2c-e95822b972e6, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a5d73d1f-506a-456b-84b3-47f3eca586f5, ip_allocation=immediate, mac_address=fa:16:3e:c0:bf:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:02:31Z, description=, dns_domain=, id=02080985-b864-4a8c-99f6-15cd1e3b9bee, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestFqdnHostnames-806689888-network, port_security_enabled=True, project_id=a7913f694a0c456794b7dd9ed628cb12, provider:network_type=geneve, 
provider:physical_network=None, provider:segmentation_id=58375, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1036, status=ACTIVE, subnets=['a7068720-9432-437a-b41d-71d3341bbf2b'], tags=[], tenant_id=a7913f694a0c456794b7dd9ed628cb12, updated_at=2025-11-28T10:02:32Z, vlan_transparent=None, network_id=02080985-b864-4a8c-99f6-15cd1e3b9bee, port_security_enabled=False, project_id=a7913f694a0c456794b7dd9ed628cb12, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1064, status=DOWN, tags=[], tenant_id=a7913f694a0c456794b7dd9ed628cb12, updated_at=2025-11-28T10:02:36Z on network 02080985-b864-4a8c-99f6-15cd1e3b9bee#033[00m Nov 28 05:02:38 localhost nova_compute[280168]: 2025-11-28 10:02:38.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:38 localhost nova_compute[280168]: 2025-11-28 10:02:38.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:02:38 localhost nova_compute[280168]: 2025-11-28 10:02:38.359 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:38 localhost dnsmasq[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/addn_hosts - 1 addresses Nov 28 05:02:38 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/host Nov 28 05:02:38 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/opts Nov 28 05:02:38 localhost podman[310497]: 2025-11-28 10:02:38.446159994 +0000 UTC m=+0.046551218 container kill 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2) Nov 28 05:02:38 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:38.669 261346 INFO neutron.agent.dhcp.agent [None req-646c4b4c-f7bf-470f-aa91-4c5339d4c197 - - - - - -] DHCP configuration for ports {'a5d73d1f-506a-456b-84b3-47f3eca586f5'} is completed#033[00m Nov 28 05:02:39 localhost nova_compute[280168]: 2025-11-28 10:02:39.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] 
: pgmap v170: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 1023 B/s wr, 20 op/s Nov 28 05:02:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e123 e123: 6 total, 6 up, 6 in Nov 28 05:02:39 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:39.946 2 INFO neutron.agent.securitygroups_rpc [None req-c410e527-579f-4d7d-bb14-04bb4c79dd9f b97430f38d544448bcb1f84d60affd50 f23b7feb8db740db9eea6302444ed3a8 - - default default] Security group member updated ['84bc6ad8-56a1-4678-950f-738b55ff6708']#033[00m Nov 28 05:02:40 localhost nova_compute[280168]: 2025-11-28 10:02:40.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:40 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:02:40 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:40 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:40 localhost podman[310535]: 2025-11-28 10:02:40.371340027 +0000 UTC m=+0.059057321 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:02:41 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:41.325 
261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:40Z, description=, device_id=459f84b2-a937-4822-a081-b6da2fd06fcc, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=469c6d02-2f81-4843-aa65-1cc5bd3e1c08, ip_allocation=immediate, mac_address=fa:16:3e:d2:2a:94, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1079, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:41Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:41 localhost dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:02:41 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:41 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 
05:02:41 localhost podman[310572]: 2025-11-28 10:02:41.519649172 +0000 UTC m=+0.041193574 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:02:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v172: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 6.5 KiB/s wr, 20 op/s Nov 28 05:02:41 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:41.739 261346 INFO neutron.agent.dhcp.agent [None req-6c39faa6-5988-436a-ac10-23a551979913 - - - - - -] DHCP configuration for ports {'469c6d02-2f81-4843-aa65-1cc5bd3e1c08'} is completed#033[00m Nov 28 05:02:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:42 localhost nova_compute[280168]: 2025-11-28 10:02:42.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:42 localhost nova_compute[280168]: 2025-11-28 10:02:42.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:02:42 localhost nova_compute[280168]: 2025-11-28 
10:02:42.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:02:42 localhost nova_compute[280168]: 2025-11-28 10:02:42.256 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:02:42 localhost nova_compute[280168]: 2025-11-28 10:02:42.330 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:42 localhost dnsmasq[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/addn_hosts - 0 addresses Nov 28 05:02:42 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/host Nov 28 05:02:42 localhost podman[310611]: 2025-11-28 10:02:42.881890663 +0000 UTC m=+0.069131349 container kill 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 28 05:02:42 localhost dnsmasq-dhcp[310272]: read /var/lib/neutron/dhcp/02080985-b864-4a8c-99f6-15cd1e3b9bee/opts Nov 28 05:02:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e124 e124: 6 total, 6 up, 6 in Nov 28 05:02:43 localhost nova_compute[280168]: 2025-11-28 10:02:43.094 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:43 localhost ovn_controller[152726]: 2025-11-28T10:02:43Z|00077|binding|INFO|Releasing lport 51f612f0-6f47-40c4-b14b-9819c35d81b4 from this chassis (sb_readonly=0) Nov 28 05:02:43 localhost ovn_controller[152726]: 2025-11-28T10:02:43Z|00078|binding|INFO|Setting lport 51f612f0-6f47-40c4-b14b-9819c35d81b4 down in Southbound Nov 28 05:02:43 localhost kernel: device tap51f612f0-6f left promiscuous mode Nov 28 05:02:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:43.102 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-02080985-b864-4a8c-99f6-15cd1e3b9bee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-02080985-b864-4a8c-99f6-15cd1e3b9bee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a7913f694a0c456794b7dd9ed628cb12', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1193d1c9-a667-4e4b-958b-68ef2ed8f8fd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=51f612f0-6f47-40c4-b14b-9819c35d81b4) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:43.103 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 51f612f0-6f47-40c4-b14b-9819c35d81b4 in datapath 02080985-b864-4a8c-99f6-15cd1e3b9bee unbound from our chassis#033[00m Nov 28 05:02:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:43.105 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 02080985-b864-4a8c-99f6-15cd1e3b9bee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:02:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:43.106 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[4751d0ac-3dbe-453a-b0a9-1df86d35f5ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:02:43 localhost nova_compute[280168]: 2025-11-28 10:02:43.116 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:43 localhost nova_compute[280168]: 2025-11-28 10:02:43.233 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:43 localhost nova_compute[280168]: 2025-11-28 10:02:43.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:43 localhost nova_compute[280168]: 2025-11-28 10:02:43.362 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v174: 177 pgs: 177 
active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 6.5 KiB/s wr, 20 op/s Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.072 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:44 localhost kernel: device tap8af1236c-20 left promiscuous mode Nov 28 05:02:44 localhost ovn_controller[152726]: 2025-11-28T10:02:44Z|00079|binding|INFO|Releasing lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf from this chassis (sb_readonly=0) Nov 28 05:02:44 localhost ovn_controller[152726]: 2025-11-28T10:02:44Z|00080|binding|INFO|Setting lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf down in Southbound Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.084 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dda653c53224db086060962b0702694', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5520a81-bbe1-4feb-9859-6165eafc855d, chassis=[], 
tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=8af1236c-205e-4af9-a882-ccde7f9d3ecf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.087 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8af1236c-205e-4af9-a882-ccde7f9d3ecf in datapath 887157f9-a765-40c0-8be5-1fba3ddea8f8 unbound from our chassis#033[00m Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.090 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 887157f9-a765-40c0-8be5-1fba3ddea8f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.091 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a8764c-2854-4d09-b4bb-fa2e06cdb968]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.103 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.233 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:02:44 localhost 
nova_compute[280168]: 2025-11-28 10:02:44.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.260 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.261 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.559 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.560 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:44.561 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:02:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:02:44 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2317602557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:02:44 localhost nova_compute[280168]: 2025-11-28 10:02:44.782 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.025 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.027 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11626MB free_disk=41.700096130371094GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", 
"product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.028 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.028 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.105 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.106 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.150 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.271 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:44Z, description=, device_id=284309e9-e4cc-4725-a39f-0b5b94312aa1, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a523cd52-834b-474e-925b-8a0b6c6f8679, ip_allocation=immediate, mac_address=fa:16:3e:dd:0e:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1095, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:44Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:02:45 localhost 
dnsmasq[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:02:45 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:45 localhost podman[310697]: 2025-11-28 10:02:45.500508209 +0000 UTC m=+0.069238593 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:02:45 localhost dnsmasq-dhcp[261709]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent [None req-8508e813-9116-48df-98bd-6595903dd5ab - - - - - -] Unable to reload_allocations dhcp for 887157f9-a765-40c0-8be5-1fba3ddea8f8.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap8af1236c-20 not found in namespace qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8. 
Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Nov 28 05:02:45 
localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent return fut.result() Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent return self.__get_result() Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent raise self._exception Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap8af1236c-20 not found in namespace qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8. Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.524 261346 ERROR neutron.agent.dhcp.agent #033[00m Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.529 261346 INFO neutron.agent.dhcp.agent [-] Synchronizing state#033[00m Nov 28 05:02:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:02:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3168695736' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.592 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.599 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.617 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 
'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.633 261346 INFO neutron.agent.dhcp.agent [None req-21e967cf-1575-474e-9a24-c502ec0a76a5 - - - - - -] DHCP configuration for ports {'a523cd52-834b-474e-925b-8a0b6c6f8679'} is completed#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.649 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:02:45 localhost nova_compute[280168]: 2025-11-28 10:02:45.650 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:02:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 5.5 KiB/s wr, 0 op/s Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.798 261346 INFO neutron.agent.dhcp.agent [None req-694fd91a-974f-4128-82f1-92e341d4f45d - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 28 05:02:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:45.799 261346 INFO neutron.agent.dhcp.agent [-] Starting network 887157f9-a765-40c0-8be5-1fba3ddea8f8 dhcp 
configuration#033[00m Nov 28 05:02:45 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:45.853 2 INFO neutron.agent.securitygroups_rpc [None req-7370f7f5-c105-405f-816d-670eb41986b4 b97430f38d544448bcb1f84d60affd50 f23b7feb8db740db9eea6302444ed3a8 - - default default] Security group member updated ['84bc6ad8-56a1-4678-950f-738b55ff6708']#033[00m Nov 28 05:02:45 localhost dnsmasq[261709]: exiting on receipt of SIGTERM Nov 28 05:02:45 localhost podman[310728]: 2025-11-28 10:02:45.976182838 +0000 UTC m=+0.059120333 container kill dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2) Nov 28 05:02:45 localhost systemd[1]: libpod-dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439.scope: Deactivated successfully. 
Nov 28 05:02:46 localhost podman[310742]: 2025-11-28 10:02:46.045514843 +0000 UTC m=+0.051689106 container died dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:02:46 localhost podman[310742]: 2025-11-28 10:02:46.076336217 +0000 UTC m=+0.082510460 container cleanup dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:02:46 localhost systemd[1]: libpod-conmon-dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439.scope: Deactivated successfully. 
Nov 28 05:02:46 localhost podman[310743]: 2025-11-28 10:02:46.133483408 +0000 UTC m=+0.134775241 container remove dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 05:02:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:46.173 261346 INFO neutron.agent.linux.ip_lib [-] Device tap8af1236c-20 cannot be used as it has no MAC address#033[00m Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.193 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost kernel: device tap8af1236c-20 entered promiscuous mode Nov 28 05:02:46 localhost ovn_controller[152726]: 2025-11-28T10:02:46Z|00081|binding|INFO|Claiming lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf for this chassis. Nov 28 05:02:46 localhost NetworkManager[5965]: [1764324166.2004] manager: (tap8af1236c-20): new Generic device (/org/freedesktop/NetworkManager/Devices/22) Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.200 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost ovn_controller[152726]: 2025-11-28T10:02:46Z|00082|binding|INFO|8af1236c-205e-4af9-a882-ccde7f9d3ecf: Claiming unknown Nov 28 05:02:46 localhost systemd-udevd[310777]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.209 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-887157f9-a765-40c0-8be5-1fba3ddea8f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dda653c53224db086060962b0702694', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5520a81-bbe1-4feb-9859-6165eafc855d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[], logical_port=8af1236c-205e-4af9-a882-ccde7f9d3ecf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.211 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8af1236c-205e-4af9-a882-ccde7f9d3ecf in datapath 887157f9-a765-40c0-8be5-1fba3ddea8f8 bound to our chassis#033[00m Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.213 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost ovn_controller[152726]: 2025-11-28T10:02:46Z|00083|binding|INFO|Setting lport 
8af1236c-205e-4af9-a882-ccde7f9d3ecf up in Southbound Nov 28 05:02:46 localhost ovn_controller[152726]: 2025-11-28T10:02:46Z|00084|binding|INFO|Setting lport 8af1236c-205e-4af9-a882-ccde7f9d3ecf ovn-installed in OVS Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.214 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port 443f831a-83a9-4df5-adbb-6fdf4d706460 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.214 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 887157f9-a765-40c0-8be5-1fba3ddea8f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.215 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c7c14fe3-1a44-497a-9369-16a7068ec51c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.231 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost 
journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost journal[228057]: ethtool ioctl error on tap8af1236c-20: No such device Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.267 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.297 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost systemd[1]: var-lib-containers-storage-overlay-f7cff75508476df411b334ef64aedbb65646b8067d2b7c094a8dcb894216f571-merged.mount: Deactivated successfully. Nov 28 05:02:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dff024c0a8ef5b75fbea3dbf531de7255c21fd5f44b33b8a799f8e3ce0ffd439-userdata-shm.mount: Deactivated successfully. Nov 28 05:02:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:46.563 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:02:46 localhost nova_compute[280168]: 2025-11-28 10:02:46.586 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:02:47 localhost nova_compute[280168]: 2025-11-28 10:02:47.019 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:47 localhost podman[310844]: Nov 28 05:02:47 localhost podman[310844]: 2025-11-28 10:02:47.083607638 +0000 UTC m=+0.090839955 container create d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 05:02:47 localhost systemd[1]: Started libpod-conmon-d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62.scope. Nov 28 05:02:47 localhost systemd[1]: tmp-crun.KFYzr7.mount: Deactivated successfully. Nov 28 05:02:47 localhost podman[310844]: 2025-11-28 10:02:47.043801088 +0000 UTC m=+0.051033435 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:02:47 localhost systemd[1]: Started libcrun container. 
Nov 28 05:02:47 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a641b960bfac5f099829c6c40c021109cd7c4b369b639859fdabca2fc43676f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:02:47 localhost podman[310844]: 2025-11-28 10:02:47.163490217 +0000 UTC m=+0.170722524 container init d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:02:47 localhost podman[310844]: 2025-11-28 10:02:47.172563004 +0000 UTC m=+0.179795311 container start d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:02:47 localhost dnsmasq[310862]: started, version 2.85 cachesize 150 Nov 28 05:02:47 localhost dnsmasq[310862]: DNS service limited to local subnets Nov 28 05:02:47 localhost dnsmasq[310862]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:02:47 localhost dnsmasq[310862]: warning: no upstream servers 
configured Nov 28 05:02:47 localhost dnsmasq-dhcp[310862]: DHCP, static leases only on 192.168.122.0, lease time 1d Nov 28 05:02:47 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:02:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:02:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.236 261346 INFO neutron.agent.dhcp.agent [None req-60eaec6b-f7eb-4ede-8f00-8df3698f4ff6 - - - - - -] Finished network 887157f9-a765-40c0-8be5-1fba3ddea8f8 dhcp configuration#033[00m Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.237 261346 INFO neutron.agent.dhcp.agent [None req-694fd91a-974f-4128-82f1-92e341d4f45d - - - - - -] Synchronizing state complete#033[00m Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.241 261346 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.dhcp_release_cmd', '--privsep_sock_path', '/tmp/tmp6a4n70_d/privsep.sock']#033[00m Nov 28 05:02:47 localhost nova_compute[280168]: 2025-11-28 10:02:47.333 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:02:47 localhost dnsmasq[310272]: exiting on receipt of SIGTERM Nov 28 05:02:47 localhost podman[310882]: 2025-11-28 10:02:47.504012402 +0000 UTC m=+0.056554924 container kill 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:02:47 localhost systemd[1]: libpod-2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f.scope: Deactivated successfully. Nov 28 05:02:47 localhost podman[310896]: 2025-11-28 10:02:47.553104837 +0000 UTC m=+0.040107220 container died 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:02:47 localhost podman[310896]: 2025-11-28 10:02:47.57538466 +0000 UTC m=+0.062387013 container cleanup 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:02:47 localhost systemd[1]: libpod-conmon-2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f.scope: Deactivated successfully. 
Nov 28 05:02:47 localhost podman[310898]: 2025-11-28 10:02:47.63704604 +0000 UTC m=+0.119165774 container remove 2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-02080985-b864-4a8c-99f6-15cd1e3b9bee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 05:02:47 localhost nova_compute[280168]: 2025-11-28 10:02:47.651 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 28 05:02:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 9.0 KiB/s wr, 19 op/s
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.726 261346 INFO neutron.agent.dhcp.agent [None req-3562f759-3584-4414-aa3a-1f377c05b5cf - - - - - -] DHCP configuration for ports {'50fa6f67-abd9-48d7-aedb-8ca08cff0a66', '4f09ff74-6c86-4b4e-b350-59898c763592', 'c11672ac-31d9-4e35-992c-9c2cc8fbd9ff', '4a0a3326-6d12-4d57-91f4-2bd267c644b1', '57c70dff-855f-436b-a33c-5f3b79153011', '962ab66e-6f28-460c-ba48-6f8d97c72fc1', '4943dbb9-4fdb-4880-be61-1585f95a0a04', '8af1236c-205e-4af9-a882-ccde7f9d3ecf', 'a523cd52-834b-474e-925b-8a0b6c6f8679', '89beab0b-910c-4e5d-abc6-3023f325656d', '469c6d02-2f81-4843-aa65-1cc5bd3e1c08'} is completed
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.902 261346 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.801 310924 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.806 310924 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.810 310924 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 28 05:02:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:47.811 310924 INFO oslo.privsep.daemon [-] privsep daemon running as pid 310924
Nov 28 05:02:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e125 e125: 6 total, 6 up, 6 in
Nov 28 05:02:48 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:48.058 261346 INFO neutron.agent.dhcp.agent [None req-a5aa7db5-0eef-457b-b419-306f44d27e99 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Nov 28 05:02:48 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:48.059 261346 INFO neutron.agent.dhcp.agent [None req-a5aa7db5-0eef-457b-b419-306f44d27e99 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Nov 28 05:02:48 localhost dnsmasq-dhcp[310862]: DHCPRELEASE(tap8af1236c-20) 192.168.122.189 fa:16:3e:a9:f5:f2
Nov 28 05:02:48 localhost nova_compute[280168]: 2025-11-28 10:02:48.365 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:48 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:48.391 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Nov 28 05:02:48 localhost systemd[1]: var-lib-containers-storage-overlay-9cb36eecf92e193033a7d68ca6fa47ce64881bf085c7d0c1c8fd7557fceaaf3f-merged.mount: Deactivated successfully.
Nov 28 05:02:48 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2811a8561213d32d0b58d9ae5adfddbd0eddc356f936ada681729b09146dc38f-userdata-shm.mount: Deactivated successfully.
Nov 28 05:02:48 localhost systemd[1]: run-netns-qdhcp\x2d02080985\x2db864\x2d4a8c\x2d99f6\x2d15cd1e3b9bee.mount: Deactivated successfully.
Nov 28 05:02:48 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses
Nov 28 05:02:48 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:48 localhost podman[310946]: 2025-11-28 10:02:48.665910013 +0000 UTC m=+0.059672670 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 28 05:02:48 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:49 localhost dnsmasq-dhcp[310862]: DHCPRELEASE(tap8af1236c-20) 192.168.122.232 fa:16:3e:dd:0e:89
Nov 28 05:02:49 localhost nova_compute[280168]: 2025-11-28 10:02:49.480 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:49 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses
Nov 28 05:02:49 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:49 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:49 localhost podman[310985]: 2025-11-28 10:02:49.676352671 +0000 UTC m=+0.068454890 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:02:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 225 MiB data, 895 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 3.5 KiB/s wr, 18 op/s
Nov 28 05:02:50 localhost dnsmasq-dhcp[310862]: DHCPRELEASE(tap8af1236c-20) 192.168.122.249 fa:16:3e:d2:2a:94
Nov 28 05:02:50 localhost nova_compute[280168]: 2025-11-28 10:02:50.425 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:50 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:50 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:50 localhost podman[311025]: 2025-11-28 10:02:50.683478877 +0000 UTC m=+0.054067448 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 28 05:02:50 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:50.846 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 28 05:02:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:50.847 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 28 05:02:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:02:50.847 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 28 05:02:50 localhost dnsmasq-dhcp[310862]: DHCPRELEASE(tap8af1236c-20) 192.168.122.181 fa:16:3e:be:72:98
Nov 28 05:02:51 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:51.017 2 INFO neutron.agent.securitygroups_rpc [req-bb7f0ac8-504e-4783-80de-f00563b1098a req-aad0b688-0986-452a-b92d-7d53ff4d1361 75ac6a26227c40ba81e61e610018d23f 1f9b84b894e641c4bee3ebcd1409ad9f - - default default] Security group member updated ['6deb8732-9203-448a-b0a5-cf6a0375d009']
Nov 28 05:02:51 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:02:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:51 localhost podman[311063]: 2025-11-28 10:02:51.414167821 +0000 UTC m=+0.059115392 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:02:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:02:51 localhost podman[311077]: 2025-11-28 10:02:51.542547206 +0000 UTC m=+0.093901979 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 28 05:02:51 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:51.580 261346 INFO neutron.agent.dhcp.agent [None req-4a726320-3439-44e2-ba79-4c07dcfdd571 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:50Z, description=, device_id=3d98c3ed-07f5-41b6-82be-a697b1dc8144, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=6633904c-1896-441a-b503-d755272315ca, ip_allocation=immediate, mac_address=fa:16:3e:64:dd:c0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1135, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:50Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:02:51 localhost podman[311077]: 2025-11-28 10:02:51.588472473 +0000 UTC m=+0.139827216 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6)
Nov 28 05:02:51 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:02:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 6.4 KiB/s wr, 84 op/s
Nov 28 05:02:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:02:51 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:51 localhost podman[311119]: 2025-11-28 10:02:51.802296937 +0000 UTC m=+0.062124826 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 05:02:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:52.126 261346 INFO neutron.agent.dhcp.agent [None req-323955ca-bd6e-47cb-b147-a3567fe93a13 - - - - - -] DHCP configuration for ports {'6633904c-1896-441a-b503-d755272315ca'} is completed
Nov 28 05:02:52 localhost nova_compute[280168]: 2025-11-28 10:02:52.336 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:53 localhost systemd[1]: tmp-crun.BYlQaP.mount: Deactivated successfully.
Nov 28 05:02:53 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:53 localhost podman[311156]: 2025-11-28 10:02:53.248343726 +0000 UTC m=+0.068318716 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:02:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:53 localhost nova_compute[280168]: 2025-11-28 10:02:53.370 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 5.6 KiB/s wr, 73 op/s
Nov 28 05:02:53 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:53.895 2 INFO neutron.agent.securitygroups_rpc [None req-ca5b8c5c-4a7b-4773-b7e8-8e9eb8c79737 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:02:53 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:53 localhost podman[311193]: 2025-11-28 10:02:53.986289802 +0000 UTC m=+0.058702840 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 05:02:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:54 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:02:54 localhost podman[311232]: 2025-11-28 10:02:54.881468168 +0000 UTC m=+0.062990802 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 28 05:02:54 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:54 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 e126: 6 total, 6 up, 6 in
Nov 28 05:02:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 73 op/s
Nov 28 05:02:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:55.687 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:55Z, description=, device_id=96675921-219e-4a6b-80f7-35d4cc5cbdb6, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4a9f17de-1903-4782-9fbb-eb87966cc66d, ip_allocation=immediate, mac_address=fa:16:3e:2e:27:e5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1176, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:55Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:02:55 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:55.796 2 INFO neutron.agent.securitygroups_rpc [None req-0a1122e3-48a9-4fdd-9791-f33fb613b799 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:02:55 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:55 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:55 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:55 localhost podman[311270]: 2025-11-28 10:02:55.967488642 +0000 UTC m=+0.061152164 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 05:02:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:56.215 261346 INFO neutron.agent.dhcp.agent [None req-b42314b3-6a3d-443c-a4d3-31c6d689860a - - - - - -] DHCP configuration for ports {'4a9f17de-1903-4782-9fbb-eb87966cc66d'} is completed
Nov 28 05:02:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:02:57 localhost nova_compute[280168]: 2025-11-28 10:02:57.338 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:57 localhost openstack_network_exporter[240973]: ERROR 10:02:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:02:57 localhost openstack_network_exporter[240973]: ERROR 10:02:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:02:57 localhost openstack_network_exporter[240973]: ERROR 10:02:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:02:57 localhost openstack_network_exporter[240973]: ERROR 10:02:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:02:57 localhost openstack_network_exporter[240973]:
Nov 28 05:02:57 localhost openstack_network_exporter[240973]: ERROR 10:02:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:02:57 localhost openstack_network_exporter[240973]:
Nov 28 05:02:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 2.9 KiB/s wr, 60 op/s
Nov 28 05:02:57 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:57.786 2 INFO neutron.agent.securitygroups_rpc [None req-2d11ad2b-bc0b-4803-8bd7-bbf5b227318c 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:02:58 localhost nova_compute[280168]: 2025-11-28 10:02:58.372 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:02:58 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:02:58 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:58 localhost podman[311307]: 2025-11-28 10:02:58.694228893 +0000 UTC m=+0.062940281 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:02:58 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:58 localhost podman[239012]: time="2025-11-28T10:02:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 05:02:58 localhost neutron_sriov_agent[254415]: 2025-11-28 10:02:58.916 2 INFO neutron.agent.securitygroups_rpc [None req-15306174-a853-47d1-9333-4213f5fad357 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:02:58 localhost podman[239012]: @ - - [28/Nov/2025:10:02:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1"
Nov 28 05:02:58 localhost podman[239012]: @ - - [28/Nov/2025:10:02:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19206 "" "Go-http-client/1.1"
Nov 28 05:02:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.8 KiB/s wr, 58 op/s
Nov 28 05:02:59 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:02:59.717 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:02:59Z, description=, device_id=6d6f5f2b-73ff-4ed3-b208-e62261a8f605, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=374a4d89-2998-47b5-9fe9-05108f11754c, ip_allocation=immediate, mac_address=fa:16:3e:1c:ee:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1182, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:02:59Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:02:59 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:02:59 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:02:59 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:02:59 localhost podman[311345]: 2025-11-28 10:02:59.936821806 +0000 UTC m=+0.064963032 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 28 05:03:00 localhost dnsmasq-dhcp[310862]: DHCPRELEASE(tap8af1236c-20) 192.168.122.179 fa:16:3e:58:c4:db
Nov 28 05:03:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:00.186 261346 INFO neutron.agent.dhcp.agent [None req-cb78e981-3191-4efd-82ce-484e1a5825f8 - - - - - -] DHCP configuration for ports {'374a4d89-2998-47b5-9fe9-05108f11754c'} is completed
Nov 28 05:03:00 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:00.212 2 INFO neutron.agent.securitygroups_rpc [None req-f1e38bd4-3201-4ca6-aca5-e6cf8d3e47ff 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:00 localhost nova_compute[280168]: 2025-11-28 10:03:00.509 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:00 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:03:00 localhost podman[311383]: 2025-11-28 10:03:00.60063203 +0000 UTC m=+0.064684244 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:03:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:00.782 261346 INFO neutron.agent.dhcp.agent [None req-ff55e36f-a28c-42c2-9334-23beac7ec5c5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:00Z, description=, device_id=b85a61c5-6e26-4d8b-a681-62813c171222, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=11686e4b-1bc0-4947-bdac-2b1663b1d9cc, ip_allocation=immediate, mac_address=fa:16:3e:14:a7:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1183, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:00Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:03:00 localhost nova_compute[280168]: 2025-11-28 10:03:00.974 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:00 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:03:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:00 localhost podman[311420]: 2025-11-28 10:03:00.996441462 +0000 UTC m=+0.056443602 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 28 05:03:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:01 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:01.248 261346 INFO neutron.agent.dhcp.agent [None req-f24cbd90-34a2-4a4a-9310-3de1de928be0 - - - - - -] DHCP configuration for ports {'11686e4b-1bc0-4947-bdac-2b1663b1d9cc'} is completed
Nov 28 05:03:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:01.474 2 INFO neutron.agent.securitygroups_rpc [None req-06213f27-8bbf-4f60-8df9-0ce6274952ed 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:03:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:03:01 localhost systemd[1]: tmp-crun.BSO3gv.mount: Deactivated successfully.
Nov 28 05:03:01 localhost podman[311442]: 2025-11-28 10:03:01.993493168 +0000 UTC m=+0.092768574 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:03:02 localhost podman[311442]: 2025-11-28 10:03:02.007052174 +0000 UTC m=+0.106327570 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 05:03:02 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:03:02 localhost podman[311444]: 2025-11-28 10:03:02.062286627 +0000 UTC m=+0.148237154 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible) Nov 28 05:03:02 localhost systemd[1]: tmp-crun.3gvKF6.mount: Deactivated 
successfully. Nov 28 05:03:02 localhost podman[311450]: 2025-11-28 10:03:02.092554115 +0000 UTC m=+0.178992757 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:03:02 localhost podman[311450]: 2025-11-28 10:03:02.107423731 +0000 UTC m=+0.193862413 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:03:02 localhost podman[311444]: 2025-11-28 10:03:02.120410609 +0000 UTC m=+0.206361136 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:03:02 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:03:02 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:03:02 localhost podman[311443]: 2025-11-28 10:03:02.190330152 +0000 UTC m=+0.284666686 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 05:03:02 localhost podman[311443]: 2025-11-28 10:03:02.247917126 +0000 UTC m=+0.342253690 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:03:02 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:03:02 localhost nova_compute[280168]: 2025-11-28 10:03:02.371 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:03.314 2 INFO neutron.agent.securitygroups_rpc [None req-58637b77-ae6c-405f-99c5-e20fa41f4923 f6a5516f43fb48ebaa16e4040dd82b84 cd7c5d213d924d3d9d4428db9d286082 - - default default] Security group member updated ['9f37640b-8d78-40f1-9b7c-3ec3fef04776']#033[00m Nov 28 05:03:03 localhost nova_compute[280168]: 2025-11-28 10:03:03.375 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Nov 28 05:03:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:03:03 localhost podman[311527]: 2025-11-28 10:03:03.973702988 +0000 UTC m=+0.084842171 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:03:04 localhost podman[311527]: 2025-11-28 10:03:04.01259046 +0000 UTC m=+0.123729583 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:03:04 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:03:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:04.510 2 INFO neutron.agent.securitygroups_rpc [None req-9701a6f5-02eb-46da-bd51-76f4153e4e2b db6b2950549b47a2a2693ecffb5083c4 cd9b97e6d04840f3a546b260a8ee9b24 - - default default] Security group member updated ['7c5e1d73-494f-47ff-9f16-a2cff6e79638']#033[00m Nov 28 05:03:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:04.772 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:03Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=5f13681b-a9b0-4110-8479-3b0babad0289, ip_allocation=immediate, mac_address=fa:16:3e:72:1c:49, name=tempest-RoutersAdminNegativeTest-942817881, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=True, project_id=cd9b97e6d04840f3a546b260a8ee9b24, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7c5e1d73-494f-47ff-9f16-a2cff6e79638'], standard_attr_id=1197, status=DOWN, tags=[], 
tenant_id=cd9b97e6d04840f3a546b260a8ee9b24, updated_at=2025-11-28T10:03:03Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:03:05 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:03:05 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:05 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:05 localhost podman[311568]: 2025-11-28 10:03:05.250208732 +0000 UTC m=+0.051515370 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:03:05 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:05.414 261346 INFO neutron.agent.dhcp.agent [None req-a8982697-18de-44b5-bb9e-b065c2789480 - - - - - -] DHCP configuration for ports {'5f13681b-a9b0-4110-8479-3b0babad0289'} is completed#033[00m Nov 28 05:03:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:03:05 Nov 28 05:03:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:03:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:03:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['images', 'volumes', '.mgr', 'vms', 'manila_metadata', 'backups', 'manila_data'] Nov 28 05:03:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes 
INFO mgr_util] scanning for idle connections.. Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:03:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 
(current 32) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:03:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:03:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:03:06 localhost 
neutron_sriov_agent[254415]: 2025-11-28 10:03:06.153 2 INFO neutron.agent.securitygroups_rpc [None req-42b0499f-37f4-4061-a4df-d49e7a70a2c4 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']#033[00m Nov 28 05:03:06 localhost nova_compute[280168]: 2025-11-28 10:03:06.689 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:06 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:03:06 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:06 localhost podman[311607]: 2025-11-28 10:03:06.768437673 +0000 UTC m=+0.063626621 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:03:06 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:07 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:07.282 2 INFO neutron.agent.securitygroups_rpc [None req-374ec1da-a6ee-43ec-aeb4-2a3037224eb2 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group 
member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']#033[00m Nov 28 05:03:07 localhost nova_compute[280168]: 2025-11-28 10:03:07.374 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:07 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:07.575 2 INFO neutron.agent.securitygroups_rpc [None req-5f1d0dc7-c78c-4e13-8de3-56bbcc932539 db6b2950549b47a2a2693ecffb5083c4 cd9b97e6d04840f3a546b260a8ee9b24 - - default default] Security group member updated ['7c5e1d73-494f-47ff-9f16-a2cff6e79638']#033[00m Nov 28 05:03:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Nov 28 05:03:07 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:03:07 localhost podman[311646]: 2025-11-28 10:03:07.784165574 +0000 UTC m=+0.047475486 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:03:07 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:07 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:03:07 localhost systemd[1]: tmp-crun.4t67ti.mount: Deactivated successfully.
Nov 28 05:03:07 localhost podman[311659]: 2025-11-28 10:03:07.921140601 +0000 UTC m=+0.104706330 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 28 05:03:07 localhost podman[311659]: 2025-11-28 10:03:07.961486008 +0000 UTC m=+0.145051727 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:03:07 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 05:03:08 localhost nova_compute[280168]: 2025-11-28 10:03:08.378 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:08.756 2 INFO neutron.agent.securitygroups_rpc [None req-be0492e2-ff74-4faa-8249-9d4640988efe f6a5516f43fb48ebaa16e4040dd82b84 cd7c5d213d924d3d9d4428db9d286082 - - default default] Security group member updated ['9f37640b-8d78-40f1-9b7c-3ec3fef04776']
Nov 28 05:03:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:08.786 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:08Z, description=, device_id=39e4d76e-c3ed-4f2c-ab49-82a582f6ea5b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2a338f3b-cc3c-43b0-9669-c423ab6a0eb7, ip_allocation=immediate, mac_address=fa:16:3e:98:e0:c3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1228, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:08Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:03:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:08.968 2 INFO neutron.agent.securitygroups_rpc [None req-f639edd5-343d-4ae3-8fa2-2054bebb498d 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:09 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:03:09 localhost podman[311703]: 2025-11-28 10:03:09.001353868 +0000 UTC m=+0.048921710 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:03:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:09 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:09.286 261346 INFO neutron.agent.dhcp.agent [None req-9a8ca310-e034-4fca-bbb3-96535be556ed - - - - - -] DHCP configuration for ports {'2a338f3b-cc3c-43b0-9669-c423ab6a0eb7'} is completed
Nov 28 05:03:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:11.438 2 INFO neutron.agent.securitygroups_rpc [None req-cc15aeb8-86ce-4ade-b16e-7c5f404511cd 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:03:12 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:12.075 2 INFO neutron.agent.securitygroups_rpc [None req-93eb68a4-7d7e-4f26-af38-fff447267025 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:12 localhost nova_compute[280168]: 2025-11-28 10:03:12.378 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:12 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:03:12 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:12 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:12 localhost podman[311743]: 2025-11-28 10:03:12.427914606 +0000 UTC m=+0.067990634 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 05:03:12 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:12.509 2 INFO neutron.agent.securitygroups_rpc [None req-0547c360-35fd-496e-9dbb-6212e2de25bb 7dd8e21cfc81423e88248cb5fa529d85 2cfa67078b8440d0bf985b2a5e0e5558 - - default default] Security group member updated ['99b6bdbb-5b32-49a7-af1f-7638d6a8bc5d']
Nov 28 05:03:13 localhost nova_compute[280168]: 2025-11-28 10:03:13.383 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:13 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:13.989 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:13Z, description=, device_id=69c61a5b-ec94-4bbf-a961-0b942c346de5, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e282245c-a634-4e4d-b9c8-31a4620b1751, ip_allocation=immediate, mac_address=fa:16:3e:cc:a1:00, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1258, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:13Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:03:14 localhost systemd[1]: tmp-crun.qkAkIm.mount: Deactivated successfully.
Nov 28 05:03:14 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:03:14 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:14 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:14 localhost podman[311780]: 2025-11-28 10:03:14.275668297 +0000 UTC m=+0.068857082 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 28 05:03:14 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:14.699 261346 INFO neutron.agent.dhcp.agent [None req-d5fdbe78-a1e0-4b36-84b6-e17319936665 - - - - - -] DHCP configuration for ports {'e282245c-a634-4e4d-b9c8-31a4620b1751'} is completed
Nov 28 05:03:15 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:15.461 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:14Z, description=, device_id=eabd79f3-7436-4767-9b03-0bae1c4f5088, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=9e71845f-d23d-4701-82b5-febedf9d3c44, ip_allocation=immediate, mac_address=fa:16:3e:06:b4:45, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1265, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:14Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8
Nov 28 05:03:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:15 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses
Nov 28 05:03:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:15 localhost podman[311817]: 2025-11-28 10:03:15.714426052 +0000 UTC m=+0.067116738 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 05:03:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:16.011 261346 INFO neutron.agent.dhcp.agent [None req-b9372eed-0d6a-498a-9bb3-5cd54850ec5c - - - - - -] DHCP configuration for ports {'9e71845f-d23d-4701-82b5-febedf9d3c44'} is completed
Nov 28 05:03:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:03:17 localhost nova_compute[280168]: 2025-11-28 10:03:17.379 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:17 localhost nova_compute[280168]: 2025-11-28 10:03:17.678 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:18 localhost nova_compute[280168]: 2025-11-28 10:03:18.413 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0)
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0)
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0)
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0)
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 05:03:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 05:03:19 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:19.306 2 INFO neutron.agent.securitygroups_rpc [None req-0bd438b8-b072-41d3-bddf-9588300a9670 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:19 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:03:19 localhost podman[311978]: 2025-11-28 10:03:19.587379211 +0000 UTC m=+0.060271949 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 28 05:03:19 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:19 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 05:03:19 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 05:03:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 28 05:03:19 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:03:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 05:03:19 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 6cf45a1b-c23e-4bbd-a248-a5d51e2e61e5 (Updating node-proxy deployment (+3 -> 3))
Nov 28 05:03:19 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 6cf45a1b-c23e-4bbd-a248-a5d51e2e61e5 (Updating node-proxy deployment (+3 -> 3))
Nov 28 05:03:19 localhost ceph-mgr[286188]: [progress INFO root] Completed event 6cf45a1b-c23e-4bbd-a248-a5d51e2e61e5 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 28 05:03:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 28 05:03:19 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 28 05:03:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v194: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:19 localhost podman[312017]: 2025-11-28 10:03:19.93257657 +0000 UTC m=+0.060219896 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 28 05:03:19 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:03:19 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:03:19 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:03:19 localhost nova_compute[280168]: 2025-11-28 10:03:19.969 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:20 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:03:20 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:20 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events
Nov 28 05:03:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 05:03:20 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:20.923 2 INFO neutron.agent.securitygroups_rpc [None req-cc447c81-1a1f-4f5d-aa14-abdbefdf4620 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']
Nov 28 05:03:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:03:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:03:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:03:21 localhost podman[312054]: 2025-11-28 10:03:21.978947399 +0000 UTC m=+0.085802501 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41)
Nov 28 05:03:22 localhost podman[312054]: 2025-11-28 10:03:22.01785818 +0000 UTC m=+0.124713312 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Nov 28 05:03:22 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:03:22 localhost nova_compute[280168]: 2025-11-28 10:03:22.382 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.417 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.439 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v196: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail
Nov 28 05:03:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:23.774 261346 INFO neutron.agent.linux.ip_lib [None req-9ba95e0d-ea20-4a29-9f4e-0fb1eb5c67d2 - - - - - -] Device tap6c0b5ff2-df cannot be used as it has no MAC address
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.798 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost kernel: device tap6c0b5ff2-df entered promiscuous mode
Nov 28 05:03:23 localhost NetworkManager[5965]: [1764324203.8090] manager: (tap6c0b5ff2-df): new Generic device (/org/freedesktop/NetworkManager/Devices/23)
Nov 28 05:03:23 localhost ovn_controller[152726]: 2025-11-28T10:03:23Z|00085|binding|INFO|Claiming lport 6c0b5ff2-df89-4fe3-9e64-c927114a583e for this chassis.
Nov 28 05:03:23 localhost ovn_controller[152726]: 2025-11-28T10:03:23Z|00086|binding|INFO|6c0b5ff2-df89-4fe3-9e64-c927114a583e: Claiming unknown
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.813 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost systemd-udevd[312084]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 05:03:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:23.823 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50a1392ce96c4024bcd36a3df403ca29', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94e1d43d-b7b9-4cde-a1cb-52c6e6527f88, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6c0b5ff2-df89-4fe3-9e64-c927114a583e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 05:03:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:23.824 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0b5ff2-df89-4fe3-9e64-c927114a583e in datapath 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694 bound to our chassis
Nov 28 05:03:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:23.825 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 28 05:03:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:23.826 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[4f2f28f8-e204-4c09-add3-cf93528d8693]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device
Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device
Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device
Nov 28 05:03:23 localhost ovn_controller[152726]: 2025-11-28T10:03:23Z|00087|binding|INFO|Setting lport 6c0b5ff2-df89-4fe3-9e64-c927114a583e ovn-installed in OVS
Nov 28 05:03:23 localhost ovn_controller[152726]: 2025-11-28T10:03:23Z|00088|binding|INFO|Setting lport 6c0b5ff2-df89-4fe3-9e64-c927114a583e up in Southbound
Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.856 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.856 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device Nov 28 05:03:23 localhost journal[228057]: ethtool ioctl error on tap6c0b5ff2-df: No such device Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.899 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:23 localhost nova_compute[280168]: 2025-11-28 10:03:23.932 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:23 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:23.950 2 INFO neutron.agent.securitygroups_rpc [None req-8c468440-8245-4890-91bf-66327309dae3 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:24 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:24.625 2 INFO neutron.agent.securitygroups_rpc [None req-6563d2b7-ae08-45e8-8b76-40044d8bfa2e 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:24 localhost podman[312155]: Nov 28 05:03:24 localhost podman[312155]: 2025-11-28 10:03:24.855991265 +0000 UTC m=+0.101570635 container create 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:03:24 localhost podman[312155]: 2025-11-28 10:03:24.804211127 +0000 UTC m=+0.049790547 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:03:24 localhost systemd[1]: Started libpod-conmon-68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889.scope. Nov 28 05:03:24 localhost systemd[1]: Started libcrun container. Nov 28 05:03:24 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ea26693fddbc198b895040c73bb855dbac4baf54a7312418b5e6c595b6259ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:03:24 localhost podman[312155]: 2025-11-28 10:03:24.941147664 +0000 UTC m=+0.186727064 container init 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:03:24 localhost podman[312155]: 2025-11-28 10:03:24.955062311 +0000 UTC m=+0.200641681 container start 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, maintainer=OpenStack Kubernetes Operator team, 
tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0) Nov 28 05:03:24 localhost dnsmasq[312173]: started, version 2.85 cachesize 150 Nov 28 05:03:24 localhost dnsmasq[312173]: DNS service limited to local subnets Nov 28 05:03:24 localhost dnsmasq[312173]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:03:24 localhost dnsmasq[312173]: warning: no upstream servers configured Nov 28 05:03:24 localhost dnsmasq-dhcp[312173]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Nov 28 05:03:24 localhost dnsmasq[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/addn_hosts - 0 addresses Nov 28 05:03:24 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/host Nov 28 05:03:24 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/opts Nov 28 05:03:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:25.160 261346 INFO neutron.agent.dhcp.agent [None req-23033da1-cc7d-4b74-afae-6e37e0f29c88 - - - - - -] DHCP configuration for ports {'eb2477c9-7d94-49d1-83d4-9980e36bbbef'} is completed#033[00m Nov 28 05:03:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail Nov 28 05:03:25 localhost systemd[1]: tmp-crun.yJGL9j.mount: Deactivated successfully. 
Nov 28 05:03:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e127 e127: 6 total, 6 up, 6 in Nov 28 05:03:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:26.730 2 INFO neutron.agent.securitygroups_rpc [None req-a6c40294-bcff-4fbb-89ad-bea0e8a1937c 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:26.922 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:26Z, description=, device_id=2c4fdf00-3fa5-4a24-a05d-8ff5ec980545, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0fbe6bd7-f895-4e05-ab45-c6c2712e010d, ip_allocation=immediate, mac_address=fa:16:3e:da:00:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:03:19Z, description=, dns_domain=, id=6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1313739008, port_security_enabled=True, project_id=50a1392ce96c4024bcd36a3df403ca29, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21540, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1295, status=ACTIVE, subnets=['63402f8f-09ed-4fd7-94cd-e097b6e23efa'], tags=[], tenant_id=50a1392ce96c4024bcd36a3df403ca29, updated_at=2025-11-28T10:03:22Z, vlan_transparent=None, 
network_id=6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, port_security_enabled=False, project_id=50a1392ce96c4024bcd36a3df403ca29, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1334, status=DOWN, tags=[], tenant_id=50a1392ce96c4024bcd36a3df403ca29, updated_at=2025-11-28T10:03:26Z on network 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694#033[00m Nov 28 05:03:27 localhost dnsmasq[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/addn_hosts - 1 addresses Nov 28 05:03:27 localhost podman[312191]: 2025-11-28 10:03:27.133477485 +0000 UTC m=+0.068895042 container kill 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 05:03:27 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/host Nov 28 05:03:27 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/opts Nov 28 05:03:27 localhost nova_compute[280168]: 2025-11-28 10:03:27.385 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:27.448 261346 INFO neutron.agent.dhcp.agent [None req-281fd3a1-7453-483c-8f48-a3f681ad2cf6 - - - - - -] DHCP configuration for ports {'0fbe6bd7-f895-4e05-ab45-c6c2712e010d'} is completed#033[00m Nov 28 05:03:27 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:27.497 2 INFO 
neutron.agent.securitygroups_rpc [None req-9bac993d-08a2-4a7a-9741-5f6e8a523396 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:27 localhost openstack_network_exporter[240973]: ERROR 10:03:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:03:27 localhost openstack_network_exporter[240973]: ERROR 10:03:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:03:27 localhost openstack_network_exporter[240973]: ERROR 10:03:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:03:27 localhost openstack_network_exporter[240973]: ERROR 10:03:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:03:27 localhost openstack_network_exporter[240973]: Nov 28 05:03:27 localhost openstack_network_exporter[240973]: ERROR 10:03:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:03:27 localhost openstack_network_exporter[240973]: Nov 28 05:03:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 1.6 KiB/s wr, 18 op/s Nov 28 05:03:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e128 e128: 6 total, 6 up, 6 in Nov 28 05:03:28 localhost nova_compute[280168]: 2025-11-28 10:03:28.422 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:28 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:28.825 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, 
binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:26Z, description=, device_id=2c4fdf00-3fa5-4a24-a05d-8ff5ec980545, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0fbe6bd7-f895-4e05-ab45-c6c2712e010d, ip_allocation=immediate, mac_address=fa:16:3e:da:00:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:03:19Z, description=, dns_domain=, id=6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1313739008, port_security_enabled=True, project_id=50a1392ce96c4024bcd36a3df403ca29, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21540, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1295, status=ACTIVE, subnets=['63402f8f-09ed-4fd7-94cd-e097b6e23efa'], tags=[], tenant_id=50a1392ce96c4024bcd36a3df403ca29, updated_at=2025-11-28T10:03:22Z, vlan_transparent=None, network_id=6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, port_security_enabled=False, project_id=50a1392ce96c4024bcd36a3df403ca29, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1334, status=DOWN, tags=[], tenant_id=50a1392ce96c4024bcd36a3df403ca29, updated_at=2025-11-28T10:03:26Z on network 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694#033[00m Nov 28 05:03:28 localhost podman[239012]: time="2025-11-28T10:03:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:03:28 localhost podman[239012]: @ - - [28/Nov/2025:10:03:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158149 "" "Go-http-client/1.1" Nov 28 05:03:28 localhost podman[239012]: @ - - [28/Nov/2025:10:03:28 +0000] "GET 
/v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19672 "" "Go-http-client/1.1" Nov 28 05:03:29 localhost dnsmasq[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/addn_hosts - 1 addresses Nov 28 05:03:29 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/host Nov 28 05:03:29 localhost podman[312229]: 2025-11-28 10:03:29.075253507 +0000 UTC m=+0.062432895 container kill 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:03:29 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/opts Nov 28 05:03:29 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:29.309 261346 INFO neutron.agent.dhcp.agent [None req-d0ad3e84-584e-4664-8c47-46a23eb9be73 - - - - - -] DHCP configuration for ports {'0fbe6bd7-f895-4e05-ab45-c6c2712e010d'} is completed#033[00m Nov 28 05:03:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.0 KiB/s wr, 23 op/s Nov 28 05:03:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e129 e129: 6 total, 6 up, 6 in Nov 28 05:03:30 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:30.078 2 INFO neutron.agent.securitygroups_rpc [None req-a538bd0d-c0aa-4d14-8c4b-26de5d170843 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated 
['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:30 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:03:30 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:30 localhost podman[312267]: 2025-11-28 10:03:30.653334392 +0000 UTC m=+0.066661254 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:03:30 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 8.7 KiB/s wr, 132 op/s Nov 28 05:03:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:31 localhost nova_compute[280168]: 2025-11-28 10:03:31.897 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e130 e130: 6 total, 6 up, 6 in Nov 28 05:03:32 localhost dnsmasq[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/addn_hosts - 0 addresses Nov 28 05:03:32 localhost dnsmasq-dhcp[312173]: read 
/var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/host Nov 28 05:03:32 localhost dnsmasq-dhcp[312173]: read /var/lib/neutron/dhcp/6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694/opts Nov 28 05:03:32 localhost podman[312306]: 2025-11-28 10:03:32.190620097 +0000 UTC m=+0.066952213 container kill 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Nov 28 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:03:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:03:32 localhost systemd[1]: tmp-crun.cAdyah.mount: Deactivated successfully. 
Nov 28 05:03:32 localhost podman[312323]: 2025-11-28 10:03:32.323917742 +0000 UTC m=+0.093357362 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:03:32 localhost podman[312323]: 2025-11-28 10:03:32.33363357 +0000 UTC m=+0.103073190 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:03:32 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:03:32 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:32.356 2 INFO neutron.agent.securitygroups_rpc [None req-23b7a4db-87e9-4c7d-8b9d-380815f2adcd 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:32 localhost nova_compute[280168]: 2025-11-28 10:03:32.387 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:32 localhost podman[312361]: 2025-11-28 10:03:32.396391163 +0000 UTC m=+0.070424149 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 05:03:32 localhost podman[312322]: 2025-11-28 10:03:32.419217843 +0000 UTC m=+0.192466109 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent) Nov 28 05:03:32 localhost nova_compute[280168]: 2025-11-28 10:03:32.466 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:32 localhost kernel: device tap6c0b5ff2-df left promiscuous mode Nov 28 05:03:32 localhost ovn_controller[152726]: 2025-11-28T10:03:32Z|00089|binding|INFO|Releasing lport 6c0b5ff2-df89-4fe3-9e64-c927114a583e from this chassis (sb_readonly=0) Nov 28 05:03:32 localhost ovn_controller[152726]: 2025-11-28T10:03:32Z|00090|binding|INFO|Setting lport 6c0b5ff2-df89-4fe3-9e64-c927114a583e down in Southbound Nov 28 05:03:32 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:32.474 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50a1392ce96c4024bcd36a3df403ca29', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 
'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94e1d43d-b7b9-4cde-a1cb-52c6e6527f88, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6c0b5ff2-df89-4fe3-9e64-c927114a583e) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:03:32 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:32.475 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 6c0b5ff2-df89-4fe3-9e64-c927114a583e in datapath 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694 unbound from our chassis#033[00m Nov 28 05:03:32 localhost podman[312320]: 2025-11-28 10:03:32.475429015 +0000 UTC m=+0.251467807 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 05:03:32 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:32.476 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:03:32 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:32.477 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[901670d2-e895-4c44-9182-201ffe8c8039]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:03:32 localhost nova_compute[280168]: 2025-11-28 10:03:32.486 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:32 localhost podman[312322]: 2025-11-28 10:03:32.498350339 +0000 UTC m=+0.271598545 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125) Nov 28 05:03:32 localhost podman[312320]: 2025-11-28 10:03:32.51046753 +0000 UTC m=+0.286506392 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 
'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:03:32 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:03:32 localhost podman[312361]: 2025-11-28 10:03:32.529526594 +0000 UTC m=+0.203559610 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:03:32 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:03:32 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:03:33 localhost nova_compute[280168]: 2025-11-28 10:03:33.423 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:33.442 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:32Z, description=, device_id=6044fed0-a964-4a28-ad00-c53ecd1a7643, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=06c8fced-7e68-4f3b-b898-de91e39611ac, ip_allocation=immediate, mac_address=fa:16:3e:bf:b3:13, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1368, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:33Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:03:33 localhost dnsmasq[310862]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:03:33 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:33 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:33 localhost podman[312431]: 2025-11-28 10:03:33.660134735 +0000 UTC m=+0.055610426 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:03:33 localhost systemd[1]: tmp-crun.9YNW19.mount: Deactivated successfully. Nov 28 05:03:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 6.0 KiB/s wr, 101 op/s Nov 28 05:03:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:33.895 261346 INFO neutron.agent.dhcp.agent [None req-9ae4ce4e-240a-49d4-a88e-f444041aed3f - - - - - -] DHCP configuration for ports {'06c8fced-7e68-4f3b-b898-de91e39611ac'} is completed#033[00m Nov 28 05:03:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:03:34 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:34.885 2 INFO neutron.agent.securitygroups_rpc [None req-a1e00e91-b063-4693-9d8b-b7a005d16694 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:34 localhost podman[312452]: 2025-11-28 10:03:34.977175419 +0000 UTC m=+0.085544822 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:03:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e131 e131: 6 total, 6 up, 6 in Nov 28 05:03:34 localhost 
podman[312452]: 2025-11-28 10:03:34.989415405 +0000 UTC m=+0.097784838 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:03:35 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:35 localhost nova_compute[280168]: 2025-11-28 10:03:35.699 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 6.0 KiB/s wr, 101 op/s Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:03:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:03:35 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:35.856 2 INFO neutron.agent.securitygroups_rpc [None req-57c1bb59-2e0b-4157-baad-e850337ecf12 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e132 e132: 6 total, 6 up, 6 in Nov 28 05:03:36 localhost dnsmasq[312173]: exiting on receipt of SIGTERM Nov 28 05:03:36 localhost podman[312489]: 2025-11-28 10:03:36.600664056 +0000 UTC m=+0.044258757 container kill 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Nov 28 05:03:36 localhost systemd[1]: libpod-68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889.scope: Deactivated successfully. Nov 28 05:03:36 localhost podman[312503]: 2025-11-28 10:03:36.674787508 +0000 UTC m=+0.058318808 container died 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:03:36 localhost systemd[1]: tmp-crun.KIRFvb.mount: Deactivated successfully. Nov 28 05:03:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889-userdata-shm.mount: Deactivated successfully. Nov 28 05:03:36 localhost podman[312503]: 2025-11-28 10:03:36.706380287 +0000 UTC m=+0.089911547 container cleanup 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 05:03:36 localhost systemd[1]: libpod-conmon-68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889.scope: Deactivated successfully. 
Nov 28 05:03:36 localhost podman[312504]: 2025-11-28 10:03:36.756975977 +0000 UTC m=+0.134944516 container remove 68800f0a433642a83dde4b2a60ebb2444b0baed6eef390dd962b7b9126a9b889 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6e7f0ee3-3d47-47d5-a9ba-c47f46ffc694, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:03:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:37.135 261346 INFO neutron.agent.dhcp.agent [None req-0ed3a335-a0b9-411f-8b12-710d52ffc5de - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:03:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e133 e133: 6 total, 6 up, 6 in Nov 28 05:03:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:03:37.262 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:03:37 localhost nova_compute[280168]: 2025-11-28 10:03:37.390 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:37 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:37.429 2 INFO neutron.agent.securitygroups_rpc [None req-69a76c47-354e-40d8-9c7a-4acd924cbac4 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 
10:03:37.593 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:03:37 localhost systemd[1]: var-lib-containers-storage-overlay-8ea26693fddbc198b895040c73bb855dbac4baf54a7312418b5e6c595b6259ad-merged.mount: Deactivated successfully. Nov 28 05:03:37 localhost systemd[1]: run-netns-qdhcp\x2d6e7f0ee3\x2d3d47\x2d47d5\x2da9ba\x2dc47f46ffc694.mount: Deactivated successfully. Nov 28 05:03:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 777 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 29 op/s Nov 28 05:03:37 localhost nova_compute[280168]: 2025-11-28 10:03:37.976 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:38 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:38.192 2 INFO neutron.agent.securitygroups_rpc [None req-eef6fd1f-c62c-4fce-a6ed-f73dc25767c9 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e134 e134: 6 total, 6 up, 6 in Nov 28 05:03:38 localhost nova_compute[280168]: 2025-11-28 10:03:38.427 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:03:38 localhost podman[312531]: 2025-11-28 10:03:38.968628761 +0000 UTC m=+0.080832669 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd) Nov 28 05:03:38 localhost podman[312531]: 2025-11-28 10:03:38.982361622 +0000 UTC m=+0.094565520 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 05:03:38 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:03:39 localhost nova_compute[280168]: 2025-11-28 10:03:39.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:39 localhost nova_compute[280168]: 2025-11-28 10:03:39.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:39 localhost nova_compute[280168]: 2025-11-28 10:03:39.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:39 localhost nova_compute[280168]: 2025-11-28 10:03:39.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:39 localhost nova_compute[280168]: 2025-11-28 10:03:39.264 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:03:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 777 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 1.9 KiB/s wr, 34 op/s Nov 28 05:03:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e135 e135: 6 total, 6 up, 6 in Nov 28 05:03:40 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:40.471 2 INFO neutron.agent.securitygroups_rpc [None req-3119b771-1e00-43fd-8d05-15e8a1d2219b 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:40 localhost nova_compute[280168]: 2025-11-28 10:03:40.566 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:41 localhost nova_compute[280168]: 2025-11-28 10:03:41.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:41 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:41.652 2 INFO neutron.agent.securitygroups_rpc [None req-1b0c80b5-e3aa-421f-ac6e-3cbc3bd6a095 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:03:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 161 MiB data, 794 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 2.9 MiB/s wr, 146 op/s Nov 28 05:03:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 
05:03:42 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:42.096 2 INFO neutron.agent.securitygroups_rpc [None req-e8cd93f0-eb23-4135-97c1-cd5750f74f24 b286c38dfd0e4889806c62c7b4b9ee98 50a1392ce96c4024bcd36a3df403ca29 - - default default] Security group member updated ['b372bb98-860c-4571-936b-bf08ecbd647d']#033[00m Nov 28 05:03:42 localhost nova_compute[280168]: 2025-11-28 10:03:42.392 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:43 localhost nova_compute[280168]: 2025-11-28 10:03:43.429 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 161 MiB data, 794 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 2.5 MiB/s wr, 99 op/s Nov 28 05:03:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:44.198 2 INFO neutron.agent.securitygroups_rpc [None req-363c9598-0bac-406f-990f-c24334dc748e 595b5cbed3764c7a95b0ab3634e5becb 8c66e098e4fb4a349dc2bb4293454135 - - default default] Security group member updated ['f75eb612-183d-4664-84c2-1d15534e163f']#033[00m Nov 28 05:03:44 localhost nova_compute[280168]: 2025-11-28 10:03:44.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:44 localhost nova_compute[280168]: 2025-11-28 10:03:44.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:03:44 localhost nova_compute[280168]: 2025-11-28 10:03:44.240 280172 DEBUG 
nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:03:44 localhost nova_compute[280168]: 2025-11-28 10:03:44.273 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:03:44 localhost nova_compute[280168]: 2025-11-28 10:03:44.957 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:44.960 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:03:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:44.961 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:03:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e136 e136: 6 total, 6 up, 6 in Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running 
periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.257 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.259 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.259 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:03:45 
localhost nova_compute[280168]: 2025-11-28 10:03:45.260 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:03:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:03:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3641885735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.699 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:03:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 161 MiB data, 794 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.2 MiB/s wr, 86 op/s Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.899 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.901 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11582MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.901 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:03:45 localhost nova_compute[280168]: 2025-11-28 10:03:45.902 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.009 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.010 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.033 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.486 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.491 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.506 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.510 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:03:46 localhost nova_compute[280168]: 2025-11-28 10:03:46.511 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:03:46 localhost systemd[1]: tmp-crun.wMI6yU.mount: Deactivated successfully. Nov 28 05:03:46 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:03:46 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:03:46 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:03:46 localhost podman[312610]: 2025-11-28 10:03:46.650632633 +0000 UTC m=+0.062359043 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:03:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:03:47 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3438130315' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:03:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:03:47 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3438130315' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:03:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e137 e137: 6 total, 6 up, 6 in Nov 28 05:03:47 localhost nova_compute[280168]: 2025-11-28 10:03:47.138 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:47 localhost nova_compute[280168]: 2025-11-28 10:03:47.395 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:47 localhost nova_compute[280168]: 2025-11-28 10:03:47.507 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 15 MiB/s wr, 161 op/s Nov 28 05:03:48 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:48.081 2 INFO neutron.agent.securitygroups_rpc [None req-d103e3e5-6a2c-4d52-97a2-9ed0e9f72fa6 595b5cbed3764c7a95b0ab3634e5becb 8c66e098e4fb4a349dc2bb4293454135 - - default default] Security group member updated ['f75eb612-183d-4664-84c2-1d15534e163f']#033[00m Nov 28 05:03:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e138 e138: 6 total, 6 up, 6 in Nov 28 
05:03:48 localhost nova_compute[280168]: 2025-11-28 10:03:48.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:03:48 localhost neutron_sriov_agent[254415]: 2025-11-28 10:03:48.293 2 INFO neutron.agent.securitygroups_rpc [None req-8df05b65-915d-4be4-a7dc-9f54beb052e9 b286c38dfd0e4889806c62c7b4b9ee98 50a1392ce96c4024bcd36a3df403ca29 - - default default] Security group member updated ['b372bb98-860c-4571-936b-bf08ecbd647d']#033[00m Nov 28 05:03:48 localhost nova_compute[280168]: 2025-11-28 10:03:48.432 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 257 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 16 MiB/s wr, 101 op/s Nov 28 05:03:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:49.963 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:03:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e139 e139: 6 total, 6 up, 6 in Nov 28 05:03:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:50.847 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:03:50 localhost ovn_metadata_agent[158525]: 2025-11-28 
10:03:50.848 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:03:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:03:50.848 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:03:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 131 KiB/s rd, 16 MiB/s wr, 185 op/s Nov 28 05:03:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:52 localhost nova_compute[280168]: 2025-11-28 10:03:52.397 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:52 localhost systemd[1]: virtsecretd.service: Deactivated successfully. Nov 28 05:03:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:03:52 localhost podman[312631]: 2025-11-28 10:03:52.56278715 +0000 UTC m=+0.081202719 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., 
url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.33.7) Nov 28 05:03:52 localhost podman[312631]: 2025-11-28 10:03:52.580334818 +0000 UTC m=+0.098750357 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, distribution-scope=public) Nov 28 05:03:52 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:03:53 localhost nova_compute[280168]: 2025-11-28 10:03:53.478 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v224: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 5.0 KiB/s wr, 76 op/s Nov 28 05:03:55 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e140 e140: 6 total, 6 up, 6 in Nov 28 05:03:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v226: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 4.4 KiB/s wr, 67 op/s Nov 28 05:03:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:03:57 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e141 e141: 6 total, 6 up, 6 in Nov 28 05:03:57 localhost nova_compute[280168]: 2025-11-28 10:03:57.401 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:57 localhost openstack_network_exporter[240973]: ERROR 10:03:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:03:57 localhost openstack_network_exporter[240973]: ERROR 10:03:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:03:57 localhost openstack_network_exporter[240973]: ERROR 10:03:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:03:57 localhost openstack_network_exporter[240973]: Nov 28 05:03:57 localhost openstack_network_exporter[240973]: ERROR 10:03:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:03:57 localhost 
openstack_network_exporter[240973]: ERROR 10:03:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:03:57 localhost openstack_network_exporter[240973]: Nov 28 05:03:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 4.3 KiB/s wr, 70 op/s Nov 28 05:03:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e142 e142: 6 total, 6 up, 6 in Nov 28 05:03:58 localhost nova_compute[280168]: 2025-11-28 10:03:58.480 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:03:58 localhost podman[239012]: time="2025-11-28T10:03:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:03:58 localhost podman[239012]: @ - - [28/Nov/2025:10:03:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:03:58 localhost podman[239012]: @ - - [28/Nov/2025:10:03:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19197 "" "Go-http-client/1.1" Nov 28 05:03:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v230: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 5 op/s Nov 28 05:04:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:00.034 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:03:59Z, description=, device_id=d3e54e1b-bd42-4183-a535-29739c2a7728, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], 
fixed_ips=[], id=d74da118-60dc-4dc3-9894-6ab138032f68, ip_allocation=immediate, mac_address=fa:16:3e:6f:c6:cd, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1486, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:03:59Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e143 e143: 6 total, 6 up, 6 in Nov 28 05:04:00 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:04:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:00 localhost podman[312669]: 2025-11-28 10:04:00.2477203 +0000 UTC m=+0.044371911 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:04:00 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:00.484 261346 INFO neutron.agent.dhcp.agent [None req-9459765b-c1a8-41c2-959b-593c843623f6 - - - - - -] DHCP configuration for ports {'d74da118-60dc-4dc3-9894-6ab138032f68'} is completed#033[00m Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.626 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 
05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:04:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:04:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:04:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e144 e144: 6 total, 6 up, 6 in Nov 28 05:04:01 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:01.492 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:00Z, description=, 
device_id=2783a834-568f-4ded-99ee-ece5c2dbdd83, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=35238ed1-dbed-4cd4-89e7-df96fbeaaa2f, ip_allocation=immediate, mac_address=fa:16:3e:34:e6:75, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1490, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:01Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:01 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:01 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:01 localhost podman[312706]: 2025-11-28 10:04:01.716722993 +0000 UTC m=+0.064550079 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:04:01 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v233: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 4.3 KiB/s wr, 74 op/s Nov 28 05:04:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:01 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:01.927 261346 INFO neutron.agent.dhcp.agent [None req-e26cf6be-c645-4a00-83aa-f00344c570b6 - - - - - -] DHCP configuration for ports {'35238ed1-dbed-4cd4-89e7-df96fbeaaa2f'} is completed#033[00m Nov 28 05:04:02 localhost nova_compute[280168]: 2025-11-28 10:04:02.403 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:04:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:04:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:04:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:04:02 localhost systemd[1]: tmp-crun.EUjUqz.mount: Deactivated successfully. 
Nov 28 05:04:02 localhost podman[312727]: 2025-11-28 10:04:02.998705823 +0000 UTC m=+0.094520018 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:04:03 localhost podman[312726]: 2025-11-28 10:04:03.045440636 +0000 UTC m=+0.146538252 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 05:04:03 localhost podman[312727]: 2025-11-28 10:04:03.060530058 +0000 UTC m=+0.156344213 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': 
['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Nov 28 05:04:03 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:04:03 localhost podman[312726]: 2025-11-28 10:04:03.082542363 +0000 UTC m=+0.183639969 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', 
'/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:04:03 localhost podman[312728]: 2025-11-28 10:04:03.103446864 +0000 UTC m=+0.195614836 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:04:03 localhost nova_compute[280168]: 2025-11-28 10:04:03.144 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:03 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:04:03 localhost podman[312735]: 2025-11-28 10:04:03.151838467 +0000 UTC m=+0.237924263 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:04:03 localhost podman[312735]: 2025-11-28 10:04:03.162451192 +0000 UTC m=+0.248536988 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:04:03 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:04:03 localhost podman[312728]: 2025-11-28 10:04:03.188094528 +0000 UTC m=+0.280262510 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 05:04:03 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:04:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:04:03 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2743022339' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:04:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:04:03 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2743022339' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:04:03 localhost nova_compute[280168]: 2025-11-28 10:04:03.481 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s Nov 28 05:04:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:04.765 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:03Z, description=, device_id=e1d6ee65-6055-40c3-8545-a5afe51532af, device_owner=network:router_gateway, dns_assignment=[], 
dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c00eeda0-afa4-4e3d-8ef8-f30a2bfd8639, ip_allocation=immediate, mac_address=fa:16:3e:4f:12:57, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1503, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:04Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:04 localhost systemd[1]: tmp-crun.GrKOxL.mount: Deactivated successfully. 
Nov 28 05:04:05 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:05.066 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) 
matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:05 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:05.069 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:05 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:05.071 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:05 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:05.072 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[a09484e7-5083-45b5-b603-7d1076454c2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:05 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:04:05 localhost podman[312825]: 2025-11-28 10:04:05.294995041 +0000 UTC m=+0.371985552 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:04:05 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:05 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:05 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e145 e145: 6 total, 6 up, 6 in Nov 28 05:04:05 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:05.537 261346 INFO neutron.agent.dhcp.agent [None req-ad746a3d-cc91-4303-885c-6820f784f0e3 - - - - - -] DHCP configuration for ports {'c00eeda0-afa4-4e3d-8ef8-f30a2bfd8639'} is completed#033[00m Nov 28 05:04:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:04:05 Nov 28 05:04:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:04:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:04:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['images', 'volumes', 'backups', '.mgr', 'vms', 'manila_data', 'manila_metadata'] Nov 28 05:04:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v236: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:04:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002721761294900428 quantized to 32 (current 32) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 
32 (current 32) Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:04:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:04:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:04:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:04:05 localhost podman[312845]: 2025-11-28 10:04:05.98899071 +0000 UTC m=+0.092235888 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:04:06 localhost podman[312845]: 2025-11-28 10:04:06.025589852 +0000 UTC m=+0.128834990 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:04:06 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:04:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:07 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:07 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:07 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:07 localhost podman[312884]: 2025-11-28 10:04:07.103245139 +0000 UTC m=+0.060943238 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:04:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e146 e146: 6 total, 6 up, 6 in Nov 28 05:04:07 localhost nova_compute[280168]: 2025-11-28 10:04:07.407 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:07 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:07.431 2 INFO neutron.agent.securitygroups_rpc [None req-f8d4b801-af07-4edf-8fd0-12384366c126 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB 
used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 4.6 KiB/s wr, 103 op/s Nov 28 05:04:08 localhost nova_compute[280168]: 2025-11-28 10:04:08.485 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:09 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:09.211 2 INFO neutron.agent.securitygroups_rpc [None req-f248108c-11ce-43fd-804f-455d486d1048 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e147 e147: 6 total, 6 up, 6 in Nov 28 05:04:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 1.7 KiB/s wr, 55 op/s Nov 28 05:04:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:04:09 localhost podman[312906]: 2025-11-28 10:04:09.980619865 +0000 UTC m=+0.082727736 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 05:04:09 localhost podman[312906]: 2025-11-28 10:04:09.998993648 +0000 UTC m=+0.101101519 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:04:10 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:04:10 localhost nova_compute[280168]: 2025-11-28 10:04:10.182 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:10 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:04:10 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:10 localhost podman[312941]: 2025-11-28 10:04:10.682175417 +0000 UTC m=+0.057534105 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:10 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 3.6 KiB/s wr, 106 op/s Nov 28 05:04:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:12 localhost nova_compute[280168]: 2025-11-28 10:04:12.411 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:04:13 localhost 
ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2186460897' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:04:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:04:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2186460897' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:04:13 localhost nova_compute[280168]: 2025-11-28 10:04:13.514 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 2.9 KiB/s wr, 85 op/s Nov 28 05:04:13 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:13.946 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:13 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:13.949 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:13 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:13.951 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:13 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:13.952 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[ac807c90-3bbd-4357-8875-0873d0af70bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:14 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:14.149 
261346 INFO neutron.agent.linux.ip_lib [None req-777cdef2-7418-4a12-a456-c55ffec90362 - - - - - -] Device tap31ecb95a-12 cannot be used as it has no MAC address#033[00m Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.170 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost kernel: device tap31ecb95a-12 entered promiscuous mode Nov 28 05:04:14 localhost ovn_controller[152726]: 2025-11-28T10:04:14Z|00091|binding|INFO|Claiming lport 31ecb95a-127f-4dbe-a0de-1dce5207aadf for this chassis. Nov 28 05:04:14 localhost ovn_controller[152726]: 2025-11-28T10:04:14Z|00092|binding|INFO|31ecb95a-127f-4dbe-a0de-1dce5207aadf: Claiming unknown Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.177 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost NetworkManager[5965]: [1764324254.1784] manager: (tap31ecb95a-12): new Generic device (/org/freedesktop/NetworkManager/Devices/24) Nov 28 05:04:14 localhost systemd-udevd[312972]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:04:14 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:14.198 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-3b7330b6-a04d-491d-86c4-bd4c5d42920c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b7330b6-a04d-491d-86c4-bd4c5d42920c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4e44b4-7eb2-420d-aa93-9cf50a2ed56e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=31ecb95a-127f-4dbe-a0de-1dce5207aadf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:14 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:14.200 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 31ecb95a-127f-4dbe-a0de-1dce5207aadf in datapath 3b7330b6-a04d-491d-86c4-bd4c5d42920c bound to our chassis#033[00m Nov 28 05:04:14 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:14.202 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 3b7330b6-a04d-491d-86c4-bd4c5d42920c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:04:14 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:14.203 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[de1e1271-5989-49ea-840a-a572d1d40447]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost ovn_controller[152726]: 2025-11-28T10:04:14Z|00093|binding|INFO|Setting lport 31ecb95a-127f-4dbe-a0de-1dce5207aadf ovn-installed in OVS Nov 28 05:04:14 localhost ovn_controller[152726]: 2025-11-28T10:04:14Z|00094|binding|INFO|Setting lport 31ecb95a-127f-4dbe-a0de-1dce5207aadf up in Southbound Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.215 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost journal[228057]: ethtool ioctl error on tap31ecb95a-12: No such device Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.250 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.282 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost nova_compute[280168]: 2025-11-28 10:04:14.846 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:14 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:04:14 localhost podman[313039]: 2025-11-28 10:04:14.876665201 +0000 UTC m=+0.059855725 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:04:14 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:14 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 e148: 6 total, 6 up, 6 in Nov 28 05:04:15 localhost podman[313081]: Nov 28 05:04:15 localhost podman[313081]: 2025-11-28 10:04:15.237406777 +0000 UTC m=+0.089010969 container create a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:15 localhost systemd[1]: Started libpod-conmon-a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b.scope. Nov 28 05:04:15 localhost podman[313081]: 2025-11-28 10:04:15.19345305 +0000 UTC m=+0.045057262 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:04:15 localhost systemd[1]: Started libcrun container. Nov 28 05:04:15 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e14a9eee3790940011a5f6fec7b205b35f5a18bfb035012dcc3d555698e72f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:04:15 localhost podman[313081]: 2025-11-28 10:04:15.316402238 +0000 UTC m=+0.168006420 container init a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:04:15 localhost podman[313081]: 2025-11-28 10:04:15.325837828 +0000 UTC m=+0.177441990 container start a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:04:15 localhost dnsmasq[313099]: started, version 2.85 cachesize 150 Nov 28 05:04:15 localhost dnsmasq[313099]: DNS service limited to local subnets Nov 28 05:04:15 localhost dnsmasq[313099]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:04:15 localhost dnsmasq[313099]: warning: no upstream servers configured Nov 28 05:04:15 localhost dnsmasq-dhcp[313099]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:04:15 localhost dnsmasq[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/addn_hosts - 0 addresses Nov 28 05:04:15 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/host Nov 28 05:04:15 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/opts Nov 28 05:04:15 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:15.500 261346 INFO neutron.agent.dhcp.agent [None req-6dde239a-04da-4ef2-9edd-b49e404f417d - - - - - -] DHCP configuration for ports {'25b5af18-6a6e-4029-a01f-7975d7d01d4f'} is completed#033[00m Nov 28 05:04:15 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:15.719 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:15Z, description=, device_id=a11f5dcb-dffb-46ad-be99-9a47466c1a1b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a4c2e21e-851b-46c3-b929-7611c9033400, ip_allocation=immediate, mac_address=fa:16:3e:7e:eb:ef, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], 
created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1571, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:15Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 43 op/s Nov 28 05:04:15 localhost podman[313116]: 2025-11-28 10:04:15.935957447 +0000 UTC m=+0.052846961 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 28 05:04:15 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 
05:04:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:16.170 261346 INFO neutron.agent.dhcp.agent [None req-31d574d1-422c-4246-9779-02f1ccf2b076 - - - - - -] DHCP configuration for ports {'a4c2e21e-851b-46c3-b929-7611c9033400'} is completed#033[00m Nov 28 05:04:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:17 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:17.268 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], 
logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:17 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:17.270 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:17 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:17.272 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:17 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:17.273 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[e9da2519-8502-496e-a1f7-cbdc569c6136]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:17.403 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:17Z, description=, 
device_id=a16605f7-3b57-4ca7-98cd-dfd3dddf9b38, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=55f31bf4-45be-4c85-bb79-87c25ca80e96, ip_allocation=immediate, mac_address=fa:16:3e:5d:96:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:10Z, description=, dns_domain=, id=3b7330b6-a04d-491d-86c4-bd4c5d42920c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2131176874, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9850, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1534, status=ACTIVE, subnets=['4ebd8429-0bf9-4a3b-9202-8b21c367bbb8'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:12Z, vlan_transparent=None, network_id=3b7330b6-a04d-491d-86c4-bd4c5d42920c, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1578, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:17Z on network 3b7330b6-a04d-491d-86c4-bd4c5d42920c#033[00m Nov 28 05:04:17 localhost nova_compute[280168]: 2025-11-28 10:04:17.414 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:17 localhost dnsmasq[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/addn_hosts - 1 addresses Nov 28 05:04:17 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/host Nov 28 05:04:17 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/opts Nov 28 
05:04:17 localhost podman[313152]: 2025-11-28 10:04:17.629678137 +0000 UTC m=+0.059218526 container kill a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s Nov 28 05:04:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:17.855 261346 INFO neutron.agent.dhcp.agent [None req-88aab17e-875d-47cc-8987-2d6a9cb3f633 - - - - - -] DHCP configuration for ports {'55f31bf4-45be-4c85-bb79-87c25ca80e96'} is completed#033[00m Nov 28 05:04:18 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:18.318 2 INFO neutron.agent.securitygroups_rpc [None req-5939fc78-1573-4689-b1d7-9426dbeeb10b 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:18 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:18.431 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:17Z, description=, device_id=a16605f7-3b57-4ca7-98cd-dfd3dddf9b38, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], 
id=55f31bf4-45be-4c85-bb79-87c25ca80e96, ip_allocation=immediate, mac_address=fa:16:3e:5d:96:ee, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:10Z, description=, dns_domain=, id=3b7330b6-a04d-491d-86c4-bd4c5d42920c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-2131176874, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=9850, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1534, status=ACTIVE, subnets=['4ebd8429-0bf9-4a3b-9202-8b21c367bbb8'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:12Z, vlan_transparent=None, network_id=3b7330b6-a04d-491d-86c4-bd4c5d42920c, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1578, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:17Z on network 3b7330b6-a04d-491d-86c4-bd4c5d42920c#033[00m Nov 28 05:04:18 localhost nova_compute[280168]: 2025-11-28 10:04:18.517 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:18 localhost dnsmasq[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/addn_hosts - 1 addresses Nov 28 05:04:18 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/host Nov 28 05:04:18 localhost podman[313188]: 2025-11-28 10:04:18.765717513 +0000 UTC m=+0.064140226 container kill a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:18 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/opts Nov 28 05:04:19 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:19.020 261346 INFO neutron.agent.dhcp.agent [None req-cd745ce8-7227-41a9-8592-32d9d2cb63e1 - - - - - -] DHCP configuration for ports {'55f31bf4-45be-4c85-bb79-87c25ca80e96'} is completed#033[00m Nov 28 05:04:19 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:19.194 2 INFO neutron.agent.securitygroups_rpc [None req-582a65ec-d5d3-451f-981d-4b6bb2c1b94e 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.3 KiB/s wr, 35 op/s Nov 28 05:04:20 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:20.891 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': 
'', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:20 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:20.893 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:20 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:20.895 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:20 localhost 
ovn_metadata_agent[158525]: 2025-11-28 10:04:20.896 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[5f91a866-546e-482b-9c29-63bfafd861e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:04:20 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:04:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:04:20 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:04:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:04:20 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 1b5136cd-8db2-4cef-82dc-2b15d47f84f4 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:04:20 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 1b5136cd-8db2-4cef-82dc-2b15d47f84f4 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:04:20 localhost ceph-mgr[286188]: [progress INFO root] Completed event 1b5136cd-8db2-4cef-82dc-2b15d47f84f4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:04:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:04:21 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": 
["destroyed"], "format": "json"} : dispatch Nov 28 05:04:21 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:04:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:04:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:21.275 2 INFO neutron.agent.securitygroups_rpc [None req-2bb5c88d-1a1c-4245-b22f-59b37a9a0aaf 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:21 localhost dnsmasq[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/addn_hosts - 0 addresses Nov 28 05:04:21 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/host Nov 28 05:04:21 localhost systemd[1]: tmp-crun.Lav7sM.mount: Deactivated successfully. Nov 28 05:04:21 localhost dnsmasq-dhcp[313099]: read /var/lib/neutron/dhcp/3b7330b6-a04d-491d-86c4-bd4c5d42920c/opts Nov 28 05:04:21 localhost podman[313311]: 2025-11-28 10:04:21.352588407 +0000 UTC m=+0.065013654 container kill a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:04:21 localhost ovn_controller[152726]: 2025-11-28T10:04:21Z|00095|binding|INFO|Releasing lport 31ecb95a-127f-4dbe-a0de-1dce5207aadf from this chassis (sb_readonly=0) Nov 28 05:04:21 localhost kernel: device 
tap31ecb95a-12 left promiscuous mode Nov 28 05:04:21 localhost nova_compute[280168]: 2025-11-28 10:04:21.517 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:21 localhost ovn_controller[152726]: 2025-11-28T10:04:21Z|00096|binding|INFO|Setting lport 31ecb95a-127f-4dbe-a0de-1dce5207aadf down in Southbound Nov 28 05:04:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:21.527 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-3b7330b6-a04d-491d-86c4-bd4c5d42920c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3b7330b6-a04d-491d-86c4-bd4c5d42920c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a4e44b4-7eb2-420d-aa93-9cf50a2ed56e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=31ecb95a-127f-4dbe-a0de-1dce5207aadf) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:21.529 158530 INFO 
neutron.agent.ovn.metadata.agent [-] Port 31ecb95a-127f-4dbe-a0de-1dce5207aadf in datapath 3b7330b6-a04d-491d-86c4-bd4c5d42920c unbound from our chassis#033[00m Nov 28 05:04:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:21.531 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3b7330b6-a04d-491d-86c4-bd4c5d42920c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:21.532 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[616eb776-6531-4bff-96ca-98a6f31155aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:21 localhost nova_compute[280168]: 2025-11-28 10:04:21.548 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:22 localhost nova_compute[280168]: 2025-11-28 10:04:22.416 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:22 localhost dnsmasq[313099]: exiting on receipt of SIGTERM Nov 28 05:04:22 localhost podman[313350]: 2025-11-28 10:04:22.459201532 +0000 UTC m=+0.071047308 container kill a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, io.buildah.version=1.41.3, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:04:22 localhost systemd[1]: libpod-a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b.scope: Deactivated successfully. Nov 28 05:04:22 localhost podman[313362]: 2025-11-28 10:04:22.524310918 +0000 UTC m=+0.053916124 container died a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:04:22 localhost podman[313362]: 2025-11-28 10:04:22.559749814 +0000 UTC m=+0.089354910 container cleanup a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:04:22 localhost systemd[1]: libpod-conmon-a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b.scope: Deactivated successfully. 
Nov 28 05:04:22 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:22.619 2 INFO neutron.agent.securitygroups_rpc [None req-038c15bb-00b9-42a6-bcc5-a72acb379335 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:22 localhost podman[313369]: 2025-11-28 10:04:22.620387572 +0000 UTC m=+0.135543615 container remove a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3b7330b6-a04d-491d-86c4-bd4c5d42920c, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:04:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:22.666 261346 INFO neutron.agent.dhcp.agent [None req-8c614c6e-ad9e-421f-861d-62c291826668 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:04:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:22.667 261346 INFO neutron.agent.dhcp.agent [None req-8c614c6e-ad9e-421f-861d-62c291826668 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:04:22 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:04:22 localhost nova_compute[280168]: 2025-11-28 10:04:22.979 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:22 localhost podman[313391]: 2025-11-28 10:04:22.982571333 +0000 UTC m=+0.089428252 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41) Nov 28 05:04:23 localhost podman[313391]: 2025-11-28 10:04:23.004497575 +0000 UTC m=+0.111354464 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm) Nov 28 05:04:23 localhost systemd[1]: 
6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:04:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:23.086 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:23.088 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:23.090 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:23.090 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5618c0-3b02-4734-844f-deae7f75386f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:23 localhost systemd[1]: var-lib-containers-storage-overlay-55e14a9eee3790940011a5f6fec7b205b35f5a18bfb035012dcc3d555698e72f-merged.mount: Deactivated successfully. Nov 28 05:04:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2fba2f1895c02415423b3a2d1514bc7c1d3e2d21a355465422aa21680a66b5b-userdata-shm.mount: Deactivated successfully. Nov 28 05:04:23 localhost systemd[1]: run-netns-qdhcp\x2d3b7330b6\x2da04d\x2d491d\x2d86c4\x2dbd4c5d42920c.mount: Deactivated successfully. 
Nov 28 05:04:23 localhost nova_compute[280168]: 2025-11-28 10:04:23.520 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:24 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:24.859 2 INFO neutron.agent.securitygroups_rpc [None req-5be6ae81-d286-424b-afd6-2b6865c77664 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:25 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:04:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:04:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:26.169 2 INFO neutron.agent.securitygroups_rpc [None req-f516662f-fbee-4914-9fae-83fcd2f7d639 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:04:27 localhost nova_compute[280168]: 2025-11-28 10:04:27.419 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:27 localhost 
openstack_network_exporter[240973]: ERROR 10:04:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:04:27 localhost openstack_network_exporter[240973]: ERROR 10:04:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:04:27 localhost openstack_network_exporter[240973]: ERROR 10:04:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:04:27 localhost openstack_network_exporter[240973]: ERROR 10:04:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:04:27 localhost openstack_network_exporter[240973]: Nov 28 05:04:27 localhost openstack_network_exporter[240973]: ERROR 10:04:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:04:27 localhost openstack_network_exporter[240973]: Nov 28 05:04:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:28 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:28.467 2 INFO neutron.agent.securitygroups_rpc [None req-35c0af25-6cf6-4373-be02-f6ff138ff337 e7c9d49fbf5f41059b8d32426e1740a6 517e4cc7e34e4fe7b49313300e5db635 - - default default] Security group member updated ['d7bc25b0-2d77-4fba-a003-707609b573d6']#033[00m Nov 28 05:04:28 localhost nova_compute[280168]: 2025-11-28 10:04:28.548 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:28 localhost podman[239012]: time="2025-11-28T10:04:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:04:28 localhost podman[239012]: @ - - [28/Nov/2025:10:04:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 
200 156330 "" "Go-http-client/1.1" Nov 28 05:04:28 localhost podman[239012]: @ - - [28/Nov/2025:10:04:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19199 "" "Go-http-client/1.1" Nov 28 05:04:29 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:29.317 2 INFO neutron.agent.securitygroups_rpc [None req-535fb80e-1678-409f-9e3d-b2eaa82a20b5 e7c9d49fbf5f41059b8d32426e1740a6 517e4cc7e34e4fe7b49313300e5db635 - - default default] Security group member updated ['d7bc25b0-2d77-4fba-a003-707609b573d6']#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.610 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '10.100.0.2/28 
2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.611 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.614 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.615 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[0e921f9a-6bf2-4a4c-9ba0-70834c11d331]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:29 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:29.916 261346 INFO neutron.agent.linux.ip_lib [None req-a547b6d6-8a29-4867-b7b0-701bdc0462de - - - - - -] Device tap48dc601b-9d cannot be used as it has no MAC address#033[00m Nov 28 05:04:29 localhost nova_compute[280168]: 2025-11-28 10:04:29.940 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog 
[-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:29 localhost kernel: device tap48dc601b-9d entered promiscuous mode Nov 28 05:04:29 localhost ovn_controller[152726]: 2025-11-28T10:04:29Z|00097|binding|INFO|Claiming lport 48dc601b-9dc3-45c9-9b98-4c07536959fd for this chassis. Nov 28 05:04:29 localhost NetworkManager[5965]: [1764324269.9498] manager: (tap48dc601b-9d): new Generic device (/org/freedesktop/NetworkManager/Devices/25) Nov 28 05:04:29 localhost nova_compute[280168]: 2025-11-28 10:04:29.949 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:29 localhost ovn_controller[152726]: 2025-11-28T10:04:29Z|00098|binding|INFO|48dc601b-9dc3-45c9-9b98-4c07536959fd: Claiming unknown Nov 28 05:04:29 localhost systemd-udevd[313424]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.960 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-d81fff26-c58f-4d58-a4c3-379fa25c0b56', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d81fff26-c58f-4d58-a4c3-379fa25c0b56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': 
'', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8780a7-df1b-4611-af44-f100aaf1ce7e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48dc601b-9dc3-45c9-9b98-4c07536959fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.962 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 48dc601b-9dc3-45c9-9b98-4c07536959fd in datapath d81fff26-c58f-4d58-a4c3-379fa25c0b56 bound to our chassis#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.963 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d81fff26-c58f-4d58-a4c3-379fa25c0b56 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:04:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:29.964 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[b1e8850e-d148-473e-8eb6-15ccd6d7a09c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:29 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:29 localhost ovn_controller[152726]: 2025-11-28T10:04:29Z|00099|binding|INFO|Setting lport 48dc601b-9dc3-45c9-9b98-4c07536959fd ovn-installed in OVS Nov 28 05:04:29 localhost ovn_controller[152726]: 2025-11-28T10:04:29Z|00100|binding|INFO|Setting lport 48dc601b-9dc3-45c9-9b98-4c07536959fd up in Southbound Nov 28 05:04:29 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:29 localhost nova_compute[280168]: 2025-11-28 10:04:29.992 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:29 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost journal[228057]: ethtool ioctl error on tap48dc601b-9d: No such device Nov 28 05:04:30 localhost nova_compute[280168]: 2025-11-28 10:04:30.029 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:30 localhost nova_compute[280168]: 2025-11-28 10:04:30.061 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:30 localhost podman[313495]: Nov 28 05:04:30 localhost podman[313495]: 2025-11-28 10:04:30.926876532 +0000 UTC m=+0.078280930 container create db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:04:30 localhost systemd[1]: Started libpod-conmon-db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24.scope. 
Nov 28 05:04:30 localhost podman[313495]: 2025-11-28 10:04:30.883287216 +0000 UTC m=+0.034691614 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:04:30 localhost systemd[1]: tmp-crun.eOnXfx.mount: Deactivated successfully. Nov 28 05:04:30 localhost systemd[1]: Started libcrun container. Nov 28 05:04:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54a4e230fd4a13b47ad5b4e150539a4b42d7ec5ed223aa986059699bf0cd00cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:04:31 localhost podman[313495]: 2025-11-28 10:04:31.003588163 +0000 UTC m=+0.154992551 container init db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:04:31 localhost podman[313495]: 2025-11-28 10:04:31.010556306 +0000 UTC m=+0.161960694 container start db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:04:31 localhost dnsmasq[313513]: started, version 2.85 cachesize 150 Nov 28 05:04:31 
localhost dnsmasq[313513]: DNS service limited to local subnets Nov 28 05:04:31 localhost dnsmasq[313513]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:04:31 localhost dnsmasq[313513]: warning: no upstream servers configured Nov 28 05:04:31 localhost dnsmasq-dhcp[313513]: DHCP, static leases only on 10.101.0.0, lease time 1d Nov 28 05:04:31 localhost dnsmasq[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/addn_hosts - 0 addresses Nov 28 05:04:31 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/host Nov 28 05:04:31 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/opts Nov 28 05:04:31 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:31.153 261346 INFO neutron.agent.dhcp.agent [None req-2ace9c18-2446-4f1e-a298-37ba949f937e - - - - - -] DHCP configuration for ports {'a2ca468b-152f-4318-90e7-ce780e265076'} is completed#033[00m Nov 28 05:04:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:32 localhost nova_compute[280168]: 2025-11-28 10:04:32.422 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:32 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:32.901 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, 
created_at=2025-11-28T10:04:31Z, description=, device_id=65a5fc61-8378-41fb-8a6b-788254c76348, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8d7574a9-ec3d-435a-81c6-c889e68005e5, ip_allocation=immediate, mac_address=fa:16:3e:3c:60:f5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:27Z, description=, dns_domain=, id=d81fff26-c58f-4d58-a4c3-379fa25c0b56, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1315557878, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16174, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1641, status=ACTIVE, subnets=['1ede51d8-16b3-470f-a7f1-54260e168ed4'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:28Z, vlan_transparent=None, network_id=d81fff26-c58f-4d58-a4c3-379fa25c0b56, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1666, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:32Z on network d81fff26-c58f-4d58-a4c3-379fa25c0b56#033[00m Nov 28 05:04:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:33.055 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:33.057 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:33.061 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if 
needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:33 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:33.062 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[7baaed11-b6a6-4a18-ae03-b8bcba1964e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:33 localhost dnsmasq[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/addn_hosts - 1 addresses Nov 28 05:04:33 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/host Nov 28 05:04:33 localhost podman[313532]: 2025-11-28 10:04:33.123881026 +0000 UTC m=+0.056305176 container kill db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:04:33 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/opts Nov 28 05:04:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:04:33 localhost systemd[1]: tmp-crun.ulCl0w.mount: Deactivated successfully. 
Nov 28 05:04:33 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:33.175 2 INFO neutron.agent.securitygroups_rpc [None req-e7bb8635-ea1b-4f3c-951e-d95d847ad39e 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:33 localhost podman[313547]: 2025-11-28 10:04:33.250686502 +0000 UTC m=+0.098563891 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3) Nov 28 05:04:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 05:04:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:04:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:04:33 localhost podman[313572]: 2025-11-28 10:04:33.361696265 +0000 UTC m=+0.088385951 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:04:33 localhost podman[313547]: 2025-11-28 10:04:33.365354287 +0000 UTC m=+0.213231656 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 05:04:33 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:04:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:33.412 261346 INFO neutron.agent.dhcp.agent [None req-8536f9ea-c6b8-4d24-a7fc-eb3b957f7c1a - - - - - -] DHCP configuration for ports {'8d7574a9-ec3d-435a-81c6-c889e68005e5'} is completed#033[00m Nov 28 05:04:33 localhost podman[313576]: 2025-11-28 10:04:33.420012782 +0000 UTC m=+0.142169248 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:04:33 localhost podman[313576]: 2025-11-28 10:04:33.425365215 +0000 UTC m=+0.147521671 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': 
['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:04:33 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:04:33 localhost podman[313574]: 2025-11-28 10:04:33.339568137 +0000 UTC m=+0.064282192 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 05:04:33 localhost podman[313572]: 2025-11-28 10:04:33.450126585 +0000 UTC m=+0.176816321 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:04:33 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:04:33 localhost podman[313574]: 2025-11-28 10:04:33.471350225 +0000 UTC m=+0.196064210 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:04:33 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:04:33 localhost nova_compute[280168]: 2025-11-28 10:04:33.550 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:34 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:34.528 2 INFO neutron.agent.securitygroups_rpc [None req-406cfd0d-88dd-4d36-9649-40665b36b8d2 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:35.340 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:31Z, description=, device_id=65a5fc61-8378-41fb-8a6b-788254c76348, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8d7574a9-ec3d-435a-81c6-c889e68005e5, ip_allocation=immediate, mac_address=fa:16:3e:3c:60:f5, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:27Z, description=, dns_domain=, id=d81fff26-c58f-4d58-a4c3-379fa25c0b56, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1315557878, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16174, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1641, status=ACTIVE, subnets=['1ede51d8-16b3-470f-a7f1-54260e168ed4'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:28Z, vlan_transparent=None, network_id=d81fff26-c58f-4d58-a4c3-379fa25c0b56, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1666, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:32Z on network d81fff26-c58f-4d58-a4c3-379fa25c0b56#033[00m Nov 28 05:04:35 localhost podman[313653]: 2025-11-28 10:04:35.563256199 +0000 UTC m=+0.071969247 container kill db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:04:35 localhost dnsmasq[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/addn_hosts - 1 addresses Nov 28 05:04:35 localhost dnsmasq-dhcp[313513]: read 
/var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/host Nov 28 05:04:35 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/opts Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:04:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:04:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:35.878 261346 INFO neutron.agent.dhcp.agent [None req-db1f7d68-a318-43ec-9aed-8d8bd4c4ca23 - - - - - -] DHCP configuration for ports {'8d7574a9-ec3d-435a-81c6-c889e68005e5'} is completed#033[00m Nov 28 05:04:35 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:35.899 2 INFO neutron.agent.securitygroups_rpc [None req-e5e44e33-1445-4a9a-aa0f-e3e5f136c603 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:36 localhost nova_compute[280168]: 2025-11-28 10:04:36.024 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:36 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:36.675 2 INFO neutron.agent.securitygroups_rpc [None req-260c5785-8a89-47b1-924c-143d50af86e5 4d2a0ee370da4e688f1d8f5c639f278d 
ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:04:36 localhost podman[313675]: 2025-11-28 10:04:36.967597129 +0000 UTC m=+0.074603088 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 
05:04:36 localhost podman[313675]: 2025-11-28 10:04:36.982444063 +0000 UTC m=+0.089449992 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:04:36 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:04:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:37.283 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:37Z, description=, device_id=996ff932-5706-4ef7-9ffc-88689082f05e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=05a4d4ad-a16f-428e-8f1a-957893a76c09, ip_allocation=immediate, mac_address=fa:16:3e:f8:dd:39, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1713, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:37Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:37 localhost nova_compute[280168]: 2025-11-28 10:04:37.425 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:37 localhost systemd[1]: tmp-crun.PWIq15.mount: Deactivated successfully. 
Nov 28 05:04:37 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:37 localhost podman[313715]: 2025-11-28 10:04:37.520217145 +0000 UTC m=+0.076615089 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:04:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:37.795 261346 INFO neutron.agent.dhcp.agent [None req-4f4dae4a-22fc-4aac-b2fa-f71365457492 - - - - - -] DHCP configuration for ports {'05a4d4ad-a16f-428e-8f1a-957893a76c09'} is completed#033[00m Nov 28 05:04:38 localhost nova_compute[280168]: 2025-11-28 10:04:38.596 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:38 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:38.899 261346 INFO neutron.agent.linux.ip_lib [None req-aa8a08ff-b3b7-4c94-aa7b-45efc7bbc092 - - - - - -] Device tap0dd1eafc-23 cannot be used as it has no MAC address#033[00m Nov 28 05:04:38 localhost nova_compute[280168]: 
2025-11-28 10:04:38.925 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:38 localhost kernel: device tap0dd1eafc-23 entered promiscuous mode Nov 28 05:04:38 localhost NetworkManager[5965]: [1764324278.9335] manager: (tap0dd1eafc-23): new Generic device (/org/freedesktop/NetworkManager/Devices/26) Nov 28 05:04:38 localhost ovn_controller[152726]: 2025-11-28T10:04:38Z|00101|binding|INFO|Claiming lport 0dd1eafc-23dc-4156-9d9f-2142e93fc855 for this chassis. Nov 28 05:04:38 localhost nova_compute[280168]: 2025-11-28 10:04:38.933 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:38 localhost ovn_controller[152726]: 2025-11-28T10:04:38Z|00102|binding|INFO|0dd1eafc-23dc-4156-9d9f-2142e93fc855: Claiming unknown Nov 28 05:04:38 localhost systemd-udevd[313746]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:04:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:38.958 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-857a46da-ae2c-48a0-8bc8-b100174874d8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-857a46da-ae2c-48a0-8bc8-b100174874d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6731419-dcc4-4fe4-a3f7-f974e0596a7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0dd1eafc-23dc-4156-9d9f-2142e93fc855) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:38.961 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0dd1eafc-23dc-4156-9d9f-2142e93fc855 in datapath 857a46da-ae2c-48a0-8bc8-b100174874d8 bound to our chassis#033[00m Nov 28 05:04:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:38.963 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 857a46da-ae2c-48a0-8bc8-b100174874d8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:38.965 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6b2330-5104-4a07-a1cf-89fe682852bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:38 localhost ovn_controller[152726]: 2025-11-28T10:04:38Z|00103|binding|INFO|Setting lport 0dd1eafc-23dc-4156-9d9f-2142e93fc855 ovn-installed in OVS Nov 28 05:04:38 localhost ovn_controller[152726]: 2025-11-28T10:04:38Z|00104|binding|INFO|Setting lport 0dd1eafc-23dc-4156-9d9f-2142e93fc855 up in Southbound Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost nova_compute[280168]: 2025-11-28 10:04:38.969 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:38 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:39 localhost journal[228057]: ethtool ioctl error on tap0dd1eafc-23: No such device Nov 28 05:04:39 localhost nova_compute[280168]: 2025-11-28 10:04:39.013 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:39 localhost nova_compute[280168]: 2025-11-28 10:04:39.040 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:39 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:39.540 2 INFO neutron.agent.securitygroups_rpc [None req-cb662c54-c616-44d2-81c7-5ec9bf360652 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:39 localhost podman[313818]: Nov 28 05:04:39 localhost podman[313818]: 2025-11-28 10:04:39.891948295 +0000 UTC m=+0.098566792 container create 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:04:39 localhost systemd[1]: Started libpod-conmon-7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618.scope. Nov 28 05:04:39 localhost podman[313818]: 2025-11-28 10:04:39.845719117 +0000 UTC m=+0.052337644 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:04:39 localhost systemd[1]: Started libcrun container. 
Nov 28 05:04:39 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33481160fabcc385d997dc82bdb19e1c1a0e4fb16542a4ec52812ba2f61cc168/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:04:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:40.003 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8::f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 10.100.0.2 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': 
'', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:40.006 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:04:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:40.010 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:04:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:40.011 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[4bbcb7a2-0a60-4393-ba70-e8013dd6fa77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:04:40 localhost podman[313818]: 2025-11-28 10:04:40.028289333 +0000 UTC m=+0.234907830 container init 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:04:40 localhost podman[313818]: 2025-11-28 10:04:40.04124381 +0000 UTC m=+0.247862307 container start 
7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:40 localhost dnsmasq[313863]: started, version 2.85 cachesize 150 Nov 28 05:04:40 localhost dnsmasq[313863]: DNS service limited to local subnets Nov 28 05:04:40 localhost dnsmasq[313863]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:04:40 localhost dnsmasq[313863]: warning: no upstream servers configured Nov 28 05:04:40 localhost dnsmasq-dhcp[313863]: DHCP, static leases only on 10.102.0.0, lease time 1d Nov 28 05:04:40 localhost dnsmasq[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/addn_hosts - 0 addresses Nov 28 05:04:40 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/host Nov 28 05:04:40 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/opts Nov 28 05:04:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:04:40 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:04:40 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:40 localhost podman[313852]: 2025-11-28 10:04:40.082979259 +0000 UTC m=+0.073038779 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:04:40 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:40 localhost podman[313864]: 2025-11-28 10:04:40.175456324 +0000 UTC m=+0.110436996 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:04:40 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:40.181 261346 INFO neutron.agent.dhcp.agent [None req-df0315a2-b00a-4270-9961-99c10f4b5cf1 - - - - - -] DHCP configuration for ports {'56d719ab-3654-408e-baed-b0b1d788646e'} is completed#033[00m Nov 28 05:04:40 localhost podman[313864]: 2025-11-28 10:04:40.21549345 +0000 UTC m=+0.150474082 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:04:40 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:04:40 localhost nova_compute[280168]: 2025-11-28 10:04:40.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:40 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:40.908 2 INFO neutron.agent.securitygroups_rpc [None req-019d9e71-83ce-440c-bb58-c6c7df87e29f 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:41 localhost nova_compute[280168]: 2025-11-28 10:04:41.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:41 localhost nova_compute[280168]: 2025-11-28 10:04:41.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:41 localhost nova_compute[280168]: 2025-11-28 10:04:41.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:41 localhost nova_compute[280168]: 2025-11-28 10:04:41.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:04:41 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:41.396 2 INFO neutron.agent.securitygroups_rpc [None req-bb6bd82f-3718-4e2c-b707-6af77a5385f7 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s Nov 28 05:04:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:42 localhost nova_compute[280168]: 2025-11-28 10:04:42.452 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:42.453 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:41Z, description=, device_id=e8aba12c-c649-4ffe-b451-637a93bb0e29, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e30196bb-cb63-4cd5-aac4-f2363fa006be, ip_allocation=immediate, mac_address=fa:16:3e:65:74:c6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, 
project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1747, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:41Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:42 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:42 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:42 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:42 localhost podman[313907]: 2025-11-28 10:04:42.692677563 +0000 UTC m=+0.068671366 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:04:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:42.719 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], 
binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:41Z, description=, device_id=65a5fc61-8378-41fb-8a6b-788254c76348, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=aba80ccc-f5c0-4dbd-b771-fb9b4d07071b, ip_allocation=immediate, mac_address=fa:16:3e:65:bc:e9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:36Z, description=, dns_domain=, id=857a46da-ae2c-48a0-8bc8-b100174874d8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1918864759, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1538, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1706, status=ACTIVE, subnets=['f694aabc-d0eb-4a4e-a738-daa5c1e20a97'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:37Z, vlan_transparent=None, network_id=857a46da-ae2c-48a0-8bc8-b100174874d8, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1753, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:42Z on network 857a46da-ae2c-48a0-8bc8-b100174874d8#033[00m Nov 28 05:04:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:42.950 261346 INFO neutron.agent.dhcp.agent [None req-e30d9dfa-29fa-48ca-887c-40f4421d6216 - - - - - -] DHCP configuration for ports {'e30196bb-cb63-4cd5-aac4-f2363fa006be'} is completed#033[00m Nov 28 05:04:43 localhost dnsmasq[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/addn_hosts - 1 addresses Nov 28 05:04:43 localhost 
dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/host Nov 28 05:04:43 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/opts Nov 28 05:04:43 localhost podman[313944]: 2025-11-28 10:04:43.053629195 +0000 UTC m=+0.067107538 container kill 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:04:43 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:43.322 261346 INFO neutron.agent.dhcp.agent [None req-7867baef-f20a-431a-97d3-bea38254697a - - - - - -] DHCP configuration for ports {'aba80ccc-f5c0-4dbd-b771-fb9b4d07071b'} is completed#033[00m Nov 28 05:04:43 localhost nova_compute[280168]: 2025-11-28 10:04:43.625 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:44.219 2 INFO neutron.agent.securitygroups_rpc [None req-364c5261-76b8-4bbe-a1a4-6cba1a17e718 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:04:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e149 e149: 6 total, 6 up, 6 in Nov 28 05:04:44 localhost 
neutron_sriov_agent[254415]: 2025-11-28 10:04:44.660 2 INFO neutron.agent.securitygroups_rpc [None req-32328951-8e5a-4539-abdc-e00522344c39 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.159 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.203 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:45.202 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:04:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:45.204 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:04:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e150 e150: 6 total, 6 up, 6 in Nov 28 05:04:45 localhost nova_compute[280168]: 2025-11-28 10:04:45.257 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:04:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:04:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.106:0/2309871379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:04:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:45 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:45.752 2 INFO neutron.agent.securitygroups_rpc [None req-bac2ff8c-e0f4-436a-a2ef-9ffa5731fa74 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:46.280 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:41Z, description=, device_id=65a5fc61-8378-41fb-8a6b-788254c76348, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=aba80ccc-f5c0-4dbd-b771-fb9b4d07071b, ip_allocation=immediate, mac_address=fa:16:3e:65:bc:e9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:04:36Z, description=, dns_domain=, id=857a46da-ae2c-48a0-8bc8-b100174874d8, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1918864759, port_security_enabled=True, project_id=8c66e098e4fb4a349dc2bb4293454135, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=1538, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1706, status=ACTIVE, subnets=['f694aabc-d0eb-4a4e-a738-daa5c1e20a97'], tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:37Z, vlan_transparent=None, 
network_id=857a46da-ae2c-48a0-8bc8-b100174874d8, port_security_enabled=False, project_id=8c66e098e4fb4a349dc2bb4293454135, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1753, status=DOWN, tags=[], tenant_id=8c66e098e4fb4a349dc2bb4293454135, updated_at=2025-11-28T10:04:42Z on network 857a46da-ae2c-48a0-8bc8-b100174874d8#033[00m Nov 28 05:04:46 localhost dnsmasq[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/addn_hosts - 1 addresses Nov 28 05:04:46 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/host Nov 28 05:04:46 localhost podman[313982]: 2025-11-28 10:04:46.493271125 +0000 UTC m=+0.056983808 container kill 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:04:46 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/opts Nov 28 05:04:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:46.813 261346 INFO neutron.agent.dhcp.agent [None req-78bf5c70-70d7-43b1-bd7c-bdc110a4b4a3 - - - - - -] DHCP configuration for ports {'aba80ccc-f5c0-4dbd-b771-fb9b4d07071b'} is completed#033[00m Nov 28 05:04:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:47.032 261346 INFO 
neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:46Z, description=, device_id=5b709ff1-f495-452b-91d4-f8f4d4d33b79, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d6b610c6-cfc0-4922-be3e-a3490957a77c, ip_allocation=immediate, mac_address=fa:16:3e:c5:8a:e8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1763, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:46Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 
2025-11-28 10:04:47.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:47 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:04:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:47 localhost podman[314022]: 2025-11-28 10:04:47.257909799 +0000 UTC m=+0.063398553 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2) Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.264 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 
28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.265 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.266 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.266 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.267 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0. 
Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.309351) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287309428, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2642, "num_deletes": 268, "total_data_size": 3875778, "memory_usage": 3940944, "flush_reason": "Manual Compaction"} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287330802, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2510994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18412, "largest_seqno": 21049, "table_properties": {"data_size": 2500880, "index_size": 6491, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22084, "raw_average_key_size": 21, "raw_value_size": 2480323, "raw_average_value_size": 2455, "num_data_blocks": 275, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324137, "oldest_key_time": 1764324137, "file_creation_time": 1764324287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 21531 microseconds, and 7615 cpu microseconds. Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.330881) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2510994 bytes OK Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.330916) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.333757) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.333792) EVENT_LOG_v1 {"time_micros": 1764324287333782, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.333823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3863916, prev total WAL file 
size 3863916, number of live WAL files 2. Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.335399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end) Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2452KB)], [27(15MB)] Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287335450, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18631635, "oldest_snapshot_seqno": -1} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12509 keys, 16635286 bytes, temperature: kUnknown Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287444836, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 16635286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16564092, "index_size": 38837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31301, "raw_key_size": 334644, "raw_average_key_size": 26, "raw_value_size": 
16351314, "raw_average_value_size": 1307, "num_data_blocks": 1476, "num_entries": 12509, "num_filter_entries": 12509, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.445200) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 16635286 bytes Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.446832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 152.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 15.4 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(14.0) write-amplify(6.6) OK, records in: 13056, records dropped: 547 output_compression: NoCompression Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.446855) EVENT_LOG_v1 {"time_micros": 1764324287446844, "job": 14, "event": "compaction_finished", "compaction_time_micros": 109477, "compaction_time_cpu_micros": 40511, "output_level": 6, "num_output_files": 1, "total_output_size": 16635286, "num_input_records": 13056, "num_output_records": 12509, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287447361, "job": 14, "event": "table_file_deletion", "file_number": 29} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324287449259, 
"job": 14, "event": "table_file_deletion", "file_number": 27} Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.335299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.449303) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.449308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.449310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.449311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:04:47.449313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.490 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:47.596 261346 INFO neutron.agent.dhcp.agent [None req-26936482-318a-44eb-8e5b-97362f40d947 - - - - - -] DHCP configuration for ports {'d6b610c6-cfc0-4922-be3e-a3490957a77c'} is completed#033[00m Nov 28 05:04:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:04:47 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1032672972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:04:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.772 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.992 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.994 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11558MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": 
"0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.994 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:04:47 localhost nova_compute[280168]: 2025-11-28 10:04:47.995 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.251 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.252 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.279 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.651 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:04:48 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2551675878' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.731 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.737 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.751 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.755 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:04:48 localhost nova_compute[280168]: 2025-11-28 10:04:48.756 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:04:49 localhost nova_compute[280168]: 2025-11-28 10:04:49.581 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s Nov 28 05:04:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 e151: 6 total, 6 up, 6 in Nov 28 05:04:50 localhost nova_compute[280168]: 2025-11-28 10:04:50.756 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:04:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:50.848 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:04:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:50.848 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:04:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:50.849 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:04:51 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:51 localhost podman[314103]: 2025-11-28 10:04:51.029697559 +0000 UTC m=+0.058560376 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 05:04:51 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:51.141 2 INFO neutron.agent.securitygroups_rpc [None req-54cf6d1c-df90-4434-9ef1-afe91707ca30 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:51 localhost nova_compute[280168]: 2025-11-28 10:04:51.437 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 4.1 KiB/s wr, 53 op/s Nov 28 05:04:51 localhost ceph-mon[301134]: 
mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:51 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:51.915 2 INFO neutron.agent.securitygroups_rpc [None req-263d211e-775e-47b3-9274-70437dee437e 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:04:52.205 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:04:52 localhost nova_compute[280168]: 2025-11-28 10:04:52.493 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:52 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:04:52 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:52 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:52 localhost podman[314141]: 2025-11-28 10:04:52.690790558 +0000 UTC m=+0.056203724 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:04:52 localhost nova_compute[280168]: 2025-11-28 10:04:52.728 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:53.310 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:52Z, description=, device_id=0f485132-8e02-47a3-b2f5-564423c3ef9b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=4e2523a4-d733-4b06-b5b8-e4949ed3c622, ip_allocation=immediate, mac_address=fa:16:3e:9b:d7:82, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1811, status=DOWN, tags=[], tenant_id=, 
updated_at=2025-11-28T10:04:52Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:53 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:04:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:53 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:53 localhost podman[314179]: 2025-11-28 10:04:53.502168055 +0000 UTC m=+0.052425078 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:04:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:04:53 localhost podman[314194]: 2025-11-28 10:04:53.605580064 +0000 UTC m=+0.076704051 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible) Nov 28 05:04:53 localhost podman[314194]: 2025-11-28 10:04:53.616582712 +0000 UTC m=+0.087706719 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible) Nov 28 05:04:53 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:04:53 localhost nova_compute[280168]: 2025-11-28 10:04:53.653 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 3.6 KiB/s wr, 47 op/s Nov 28 05:04:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:53.797 261346 INFO neutron.agent.dhcp.agent [None req-3441817a-fc5b-4e4a-8786-ee6f4841c459 - - - - - -] DHCP configuration for ports {'4e2523a4-d733-4b06-b5b8-e4949ed3c622'} is completed#033[00m Nov 28 05:04:54 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:54.286 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:53Z, description=, device_id=0cdeddab-1ad3-4978-9351-14136638bcd9, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=024a63c9-8d6e-42ba-ba05-3bcff8cb8032, ip_allocation=immediate, mac_address=fa:16:3e:f7:75:8b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, 
updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1817, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:53Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:54 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:04:54 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:54 localhost podman[314237]: 2025-11-28 10:04:54.514384317 +0000 UTC m=+0.044642518 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:04:54 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:54 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:54.767 261346 INFO neutron.agent.dhcp.agent [None req-37f8069a-5896-439b-9707-5a9e9576d5a7 - - - - - -] DHCP configuration for ports {'024a63c9-8d6e-42ba-ba05-3bcff8cb8032'} is completed#033[00m Nov 28 05:04:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s Nov 28 05:04:55 localhost nova_compute[280168]: 2025-11-28 10:04:55.811 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:56.094 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:04:55Z, description=, device_id=ffb38a66-64a8-419b-8d9e-757a81603106, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d7fd03cb-eae4-46ed-aec3-7c90878a5f67, ip_allocation=immediate, mac_address=fa:16:3e:67:e1:81, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1826, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:04:55Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:04:56 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:04:56 localhost 
podman[314273]: 2025-11-28 10:04:56.377196519 +0000 UTC m=+0.057806432 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:04:56 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:04:56 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:04:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:04:56 localhost nova_compute[280168]: 2025-11-28 10:04:56.957 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:57 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:04:57.051 261346 INFO neutron.agent.dhcp.agent [None req-b756c8ea-e8e5-484d-9a70-ff54e365566f - - - - - -] DHCP configuration for ports {'d7fd03cb-eae4-46ed-aec3-7c90878a5f67'} is completed#033[00m Nov 28 05:04:57 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:57.062 2 INFO neutron.agent.securitygroups_rpc [None req-1b122d45-69b6-44ae-9f44-6255649c2a99 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:57 localhost nova_compute[280168]: 2025-11-28 10:04:57.496 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:57 localhost openstack_network_exporter[240973]: ERROR 10:04:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:04:57 localhost openstack_network_exporter[240973]: ERROR 10:04:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:04:57 localhost openstack_network_exporter[240973]: ERROR 10:04:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:04:57 localhost openstack_network_exporter[240973]: ERROR 10:04:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:04:57 localhost openstack_network_exporter[240973]: Nov 28 05:04:57 localhost openstack_network_exporter[240973]: ERROR 10:04:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:04:57 localhost openstack_network_exporter[240973]: Nov 28 05:04:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:04:58 localhost nova_compute[280168]: 2025-11-28 10:04:58.656 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:04:58 localhost neutron_sriov_agent[254415]: 2025-11-28 10:04:58.759 2 INFO neutron.agent.securitygroups_rpc [None req-d1d37070-b21e-47b1-9333-9a0acdf29e79 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:04:58 localhost podman[239012]: time="2025-11-28T10:04:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:04:58 localhost podman[239012]: @ - - [28/Nov/2025:10:04:58 +0000] "GET 
/v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159978 "" "Go-http-client/1.1" Nov 28 05:04:58 localhost podman[239012]: @ - - [28/Nov/2025:10:04:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20153 "" "Go-http-client/1.1" Nov 28 05:04:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:00 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:00.096 2 INFO neutron.agent.securitygroups_rpc [None req-f8f0dbe9-4862-4719-841c-e92cc8d478e0 6aa2d31d87d44bd68f3306827a96cb84 7163b69fa8fc4d998fb494edfa303457 - - default default] Security group member updated ['d5ac7cb5-5e8f-446f-ac61-9c9e90d707c1']#033[00m Nov 28 05:05:00 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:05:00 localhost podman[314310]: 2025-11-28 10:05:00.715330076 +0000 UTC m=+0.067295034 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:05:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:00 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:01.223 2 INFO neutron.agent.securitygroups_rpc [None 
req-8e254d5a-cb3d-4d3c-8c34-ca57b16025b1 6aa2d31d87d44bd68f3306827a96cb84 7163b69fa8fc4d998fb494edfa303457 - - default default] Security group member updated ['d5ac7cb5-5e8f-446f-ac61-9c9e90d707c1']#033[00m Nov 28 05:05:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:01.970 2 INFO neutron.agent.securitygroups_rpc [None req-70ec53ac-b8b3-4633-a977-c5aaaab920ff 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:02 localhost nova_compute[280168]: 2025-11-28 10:05:02.357 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:02 localhost nova_compute[280168]: 2025-11-28 10:05:02.498 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:02.751 2 INFO neutron.agent.securitygroups_rpc [None req-472369ad-637e-463f-8142-3dff7a706106 6aa2d31d87d44bd68f3306827a96cb84 7163b69fa8fc4d998fb494edfa303457 - - default default] Security group member updated ['d5ac7cb5-5e8f-446f-ac61-9c9e90d707c1']#033[00m Nov 28 05:05:02 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:02.951 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:02Z, 
description=, device_id=77059af7-1071-4399-bb20-a4a44fadd0d4, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=50da1b3b-e236-4c33-b4c3-cea5ee9d8fff, ip_allocation=immediate, mac_address=fa:16:3e:48:f0:6e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1847, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:05:02Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:02.973 2 INFO neutron.agent.securitygroups_rpc [None req-a8642c38-ef8d-4c54-aca2-47d1d691b2fe 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:03 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:05:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:03 localhost podman[314350]: 2025-11-28 
10:05:03.174128644 +0000 UTC m=+0.067470108 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:05:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:03.429 261346 INFO neutron.agent.dhcp.agent [None req-259efdca-8dd3-4612-a024-f75b0f6f029b - - - - - -] DHCP configuration for ports {'50da1b3b-e236-4c33-b4c3-cea5ee9d8fff'} is completed#033[00m Nov 28 05:05:03 localhost nova_compute[280168]: 2025-11-28 10:05:03.659 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:03.772 2 INFO neutron.agent.securitygroups_rpc [None req-551a0a6f-26c0-4f59-8a71-37fd214c141c 6aa2d31d87d44bd68f3306827a96cb84 7163b69fa8fc4d998fb494edfa303457 - - default default] Security group member updated ['d5ac7cb5-5e8f-446f-ac61-9c9e90d707c1']#033[00m Nov 28 05:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 05:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:05:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:05:03 localhost podman[314372]: 2025-11-28 10:05:03.977431293 +0000 UTC m=+0.079167927 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:04 localhost podman[314372]: 2025-11-28 10:05:04.013442197 +0000 UTC m=+0.115178831 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 05:05:04 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:05:04 localhost podman[314373]: 2025-11-28 10:05:04.032714518 +0000 UTC m=+0.130926064 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent) Nov 28 05:05:04 localhost podman[314373]: 2025-11-28 10:05:04.06149526 +0000 UTC 
m=+0.159706786 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 05:05:04 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:05:04 localhost podman[314377]: 2025-11-28 10:05:04.1541461 +0000 UTC m=+0.247377593 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:05:04 localhost podman[314377]: 2025-11-28 10:05:04.16395598 +0000 UTC m=+0.257187463 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:05:04 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:05:04 localhost systemd[1]: tmp-crun.g63zQj.mount: Deactivated successfully. Nov 28 05:05:04 localhost podman[314371]: 2025-11-28 10:05:04.261254663 +0000 UTC m=+0.365522974 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm) Nov 28 05:05:04 localhost podman[314371]: 2025-11-28 10:05:04.299851686 +0000 UTC m=+0.404119947 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm) Nov 28 05:05:04 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:05:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:05:05 Nov 28 05:05:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:05:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:05:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_metadata', 'vms', 'backups', 'images', '.mgr', 'manila_data', 'volumes'] Nov 28 05:05:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:05 localhost nova_compute[280168]: 2025-11-28 10:05:05.717 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:05:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 
32 (current 32) Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:05:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:05:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:05:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:07 localhost dnsmasq[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/addn_hosts - 0 addresses Nov 28 05:05:07 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/host Nov 28 05:05:07 localhost dnsmasq-dhcp[313863]: read /var/lib/neutron/dhcp/857a46da-ae2c-48a0-8bc8-b100174874d8/opts Nov 28 05:05:07 localhost podman[314471]: 2025-11-28 
10:05:07.422202449 +0000 UTC m=+0.063528918 container kill 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:05:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:05:07 localhost nova_compute[280168]: 2025-11-28 10:05:07.501 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:07 localhost systemd[1]: tmp-crun.w7YAX4.mount: Deactivated successfully. 
Nov 28 05:05:07 localhost podman[314484]: 2025-11-28 10:05:07.533275514 +0000 UTC m=+0.086928705 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:05:07 localhost podman[314484]: 2025-11-28 10:05:07.5478593 +0000 UTC m=+0.101512491 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:05:07 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:05:07 localhost ovn_controller[152726]: 2025-11-28T10:05:07Z|00105|binding|INFO|Releasing lport 0dd1eafc-23dc-4156-9d9f-2142e93fc855 from this chassis (sb_readonly=0) Nov 28 05:05:07 localhost ovn_controller[152726]: 2025-11-28T10:05:07Z|00106|binding|INFO|Setting lport 0dd1eafc-23dc-4156-9d9f-2142e93fc855 down in Southbound Nov 28 05:05:07 localhost nova_compute[280168]: 2025-11-28 10:05:07.639 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:07 localhost kernel: device tap0dd1eafc-23 left promiscuous mode Nov 28 05:05:07 localhost nova_compute[280168]: 2025-11-28 10:05:07.648 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:07 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:07.648 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.102.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-857a46da-ae2c-48a0-8bc8-b100174874d8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-857a46da-ae2c-48a0-8bc8-b100174874d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=d6731419-dcc4-4fe4-a3f7-f974e0596a7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0dd1eafc-23dc-4156-9d9f-2142e93fc855) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:07 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:07.651 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0dd1eafc-23dc-4156-9d9f-2142e93fc855 in datapath 857a46da-ae2c-48a0-8bc8-b100174874d8 unbound from our chassis#033[00m Nov 28 05:05:07 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:07.654 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 857a46da-ae2c-48a0-8bc8-b100174874d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:05:07 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:07.655 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[74bf3411-3699-461f-85ba-a80bd4dbb800]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:07 localhost nova_compute[280168]: 2025-11-28 10:05:07.663 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:07 localhost dnsmasq[313863]: exiting on receipt of SIGTERM Nov 28 05:05:07 localhost podman[314532]: 2025-11-28 10:05:07.889171961 +0000 UTC m=+0.057007829 container kill 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:07 localhost systemd[1]: libpod-7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618.scope: Deactivated successfully. Nov 28 05:05:07 localhost podman[314545]: 2025-11-28 10:05:07.962458147 +0000 UTC m=+0.057193353 container died 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:05:08 localhost podman[314545]: 2025-11-28 10:05:08.042715917 +0000 UTC m=+0.137451043 container cleanup 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:05:08 localhost systemd[1]: libpod-conmon-7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618.scope: Deactivated successfully. 
Nov 28 05:05:08 localhost podman[314546]: 2025-11-28 10:05:08.067729354 +0000 UTC m=+0.152323140 container remove 7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-857a46da-ae2c-48a0-8bc8-b100174874d8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 28 05:05:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:08.095 261346 INFO neutron.agent.dhcp.agent [None req-e6c3cbae-7101-4302-a706-0f6eaeffc841 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:08.096 261346 INFO neutron.agent.dhcp.agent [None req-e6c3cbae-7101-4302-a706-0f6eaeffc841 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:08 localhost nova_compute[280168]: 2025-11-28 10:05:08.136 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:08.240 2 INFO neutron.agent.securitygroups_rpc [None req-687de3fc-aa2f-4499-970c-3ba0a56c0388 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:05:08 localhost systemd[1]: var-lib-containers-storage-overlay-33481160fabcc385d997dc82bdb19e1c1a0e4fb16542a4ec52812ba2f61cc168-merged.mount: Deactivated successfully. 
Nov 28 05:05:08 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d9055d2c408a209194a629fb271361b17d5a881d5c8d65e14bd567976a0b618-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:08 localhost systemd[1]: run-netns-qdhcp\x2d857a46da\x2dae2c\x2d48a0\x2d8bc8\x2db100174874d8.mount: Deactivated successfully. Nov 28 05:05:08 localhost nova_compute[280168]: 2025-11-28 10:05:08.686 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:09 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:05:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:09 localhost systemd[1]: tmp-crun.4nqT4t.mount: Deactivated successfully. Nov 28 05:05:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:09 localhost podman[314602]: 2025-11-28 10:05:09.017223623 +0000 UTC m=+0.118852723 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:05:09 localhost dnsmasq[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/addn_hosts - 0 addresses Nov 28 05:05:09 localhost dnsmasq-dhcp[313513]: read /var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/host Nov 28 05:05:09 localhost dnsmasq-dhcp[313513]: read 
/var/lib/neutron/dhcp/d81fff26-c58f-4d58-a4c3-379fa25c0b56/opts Nov 28 05:05:09 localhost podman[314614]: 2025-11-28 10:05:09.066485914 +0000 UTC m=+0.102637947 container kill db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:05:09 localhost nova_compute[280168]: 2025-11-28 10:05:09.304 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:09 localhost ovn_controller[152726]: 2025-11-28T10:05:09Z|00107|binding|INFO|Releasing lport 48dc601b-9dc3-45c9-9b98-4c07536959fd from this chassis (sb_readonly=0) Nov 28 05:05:09 localhost ovn_controller[152726]: 2025-11-28T10:05:09Z|00108|binding|INFO|Setting lport 48dc601b-9dc3-45c9-9b98-4c07536959fd down in Southbound Nov 28 05:05:09 localhost kernel: device tap48dc601b-9d left promiscuous mode Nov 28 05:05:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:09.324 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.101.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-d81fff26-c58f-4d58-a4c3-379fa25c0b56', 
'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d81fff26-c58f-4d58-a4c3-379fa25c0b56', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c66e098e4fb4a349dc2bb4293454135', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3c8780a7-df1b-4611-af44-f100aaf1ce7e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=48dc601b-9dc3-45c9-9b98-4c07536959fd) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:09.326 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 48dc601b-9dc3-45c9-9b98-4c07536959fd in datapath d81fff26-c58f-4d58-a4c3-379fa25c0b56 unbound from our chassis#033[00m Nov 28 05:05:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:09.329 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d81fff26-c58f-4d58-a4c3-379fa25c0b56, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:05:09 localhost nova_compute[280168]: 2025-11-28 10:05:09.330 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:09.331 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c9930ccb-3087-4ef5-b59e-e78b2ac12929]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v274: 
177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:10.362 2 INFO neutron.agent.securitygroups_rpc [None req-20bc108e-ba14-415c-a7d5-38bda2943a27 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:10 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:10.449 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:09Z, description=, device_id=2e7ac6be-2b29-43ff-882f-d19806f73241, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=0e9b8c70-8289-4af3-939d-c2588ce1fe4f, ip_allocation=immediate, mac_address=fa:16:3e:98:1d:06, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1877, 
status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:05:10Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:10.460 2 INFO neutron.agent.securitygroups_rpc [None req-dee3de84-6e3e-4d2c-b4d4-a44f07424a62 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:10 localhost dnsmasq[313513]: exiting on receipt of SIGTERM Nov 28 05:05:10 localhost podman[314662]: 2025-11-28 10:05:10.468265695 +0000 UTC m=+0.066487488 container kill db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:05:10 localhost systemd[1]: libpod-db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24.scope: Deactivated successfully. Nov 28 05:05:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:05:10 localhost podman[314676]: 2025-11-28 10:05:10.555388376 +0000 UTC m=+0.064825568 container died db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:10 localhost systemd[1]: tmp-crun.ooNQSQ.mount: Deactivated successfully. Nov 28 05:05:10 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:10 localhost podman[314676]: 2025-11-28 10:05:10.609635709 +0000 UTC m=+0.119072831 container remove db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d81fff26-c58f-4d58-a4c3-379fa25c0b56, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:10 localhost systemd[1]: libpod-conmon-db4f1e73011f5afa73a2974ecc8d11b3610a5c42e770d1cf01bae7f0e9edad24.scope: Deactivated successfully. 
Nov 28 05:05:10 localhost podman[314682]: 2025-11-28 10:05:10.687084923 +0000 UTC m=+0.189598342 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 05:05:10 localhost podman[314682]: 2025-11-28 10:05:10.702303789 +0000 UTC m=+0.204817188 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 05:05:10 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:05:10 localhost podman[314726]: 2025-11-28 10:05:10.755512989 +0000 UTC m=+0.076801074 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:10 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:05:10 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:10 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:10 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:10.892 261346 INFO neutron.agent.dhcp.agent [None req-fc46b6ff-7bfc-4f5f-9579-0a2a8e3afd29 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:10 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:10.893 261346 INFO neutron.agent.dhcp.agent [None req-fc46b6ff-7bfc-4f5f-9579-0a2a8e3afd29 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:11.026 261346 INFO neutron.agent.dhcp.agent [None req-f065876f-5691-4382-ba6a-aa358a5474a4 - - - - - -] DHCP configuration for ports {'0e9b8c70-8289-4af3-939d-c2588ce1fe4f'} is completed#033[00m Nov 28 05:05:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:11.365 2 INFO neutron.agent.securitygroups_rpc [None req-dada4afd-3538-47ee-91fc-3d547e9c2d44 a22fc7f25dd74349be1fe8842a517a9e 
e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:11 localhost systemd[1]: var-lib-containers-storage-overlay-54a4e230fd4a13b47ad5b4e150539a4b42d7ec5ed223aa986059699bf0cd00cc-merged.mount: Deactivated successfully. Nov 28 05:05:11 localhost systemd[1]: run-netns-qdhcp\x2dd81fff26\x2dc58f\x2d4d58\x2da4c3\x2d379fa25c0b56.mount: Deactivated successfully. Nov 28 05:05:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:11.569 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:10Z, description=, device_id=382e1b3d-fdff-4aa5-8e88-248b9691853e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=8c329718-f2f6-443f-b03f-4395241be6de, ip_allocation=immediate, mac_address=fa:16:3e:d9:c8:d0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, 
security_groups=[], standard_attr_id=1878, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:05:11Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:11.574 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:11.652 2 INFO neutron.agent.securitygroups_rpc [None req-3c61a2a4-46f5-45d3-8f70-86c422c843ed 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:05:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:11.708 2 INFO neutron.agent.securitygroups_rpc [None req-bb3cde64-53ac-43b6-992c-b149fee8302c 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:11 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 6 addresses Nov 28 05:05:11 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:11 localhost systemd[1]: tmp-crun.pjQic2.mount: Deactivated successfully. 
Nov 28 05:05:11 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:11 localhost podman[314769]: 2025-11-28 10:05:11.766104903 +0000 UTC m=+0.061091714 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:05:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:11 localhost nova_compute[280168]: 2025-11-28 10:05:11.794 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:11.974 2 INFO neutron.agent.securitygroups_rpc [None req-2c04ab47-4041-48cd-a577-3ad3edc1e57b a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:11.984 261346 INFO neutron.agent.dhcp.agent [None req-5976b58a-662c-4a63-8160-641d43c9a1cf - - - - - -] DHCP configuration for ports {'8c329718-f2f6-443f-b03f-4395241be6de'} is completed#033[00m Nov 28 05:05:12 localhost nova_compute[280168]: 2025-11-28 10:05:12.047 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:12 
localhost nova_compute[280168]: 2025-11-28 10:05:12.504 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:13.292 2 INFO neutron.agent.securitygroups_rpc [None req-865841f9-78fc-4d59-bed2-9c90df3d7ecb a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:05:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2371469984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:05:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:05:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2371469984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:05:13 localhost nova_compute[280168]: 2025-11-28 10:05:13.729 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:15 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:15.480 2 INFO neutron.agent.securitygroups_rpc [None req-95a3885e-d25b-4b35-9841-10ba9cd222bb a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:15 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:05:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:15 localhost podman[314806]: 2025-11-28 10:05:15.882920795 +0000 UTC m=+0.063396333 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:15 localhost dnsmasq-dhcp[310862]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:16 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:05:16 localhost podman[314846]: 2025-11-28 10:05:16.603206831 +0000 UTC m=+0.061198216 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:05:16 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:16 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:16.734 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:16 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:16.737 2 INFO neutron.agent.securitygroups_rpc [None req-8e314c79-26fe-4adc-939f-e587c4e62ad2 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:17 localhost nova_compute[280168]: 2025-11-28 10:05:17.508 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:17.846 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:17Z, description=, device_id=304ed48c-45db-4273-b511-259f5369a67b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2b7a5158-ba0d-40a0-a879-ecb41201e794, ip_allocation=immediate, mac_address=fa:16:3e:58:33:d9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1906, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:05:17Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:18 localhost dnsmasq[310862]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:05:18 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:18 localhost podman[314883]: 2025-11-28 10:05:18.076545677 +0000 UTC m=+0.062102814 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:18 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:18 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:18.435 261346 INFO neutron.agent.dhcp.agent [None req-385723ac-76ee-47d3-8fce-105c83b23e4a - - - - - -] DHCP configuration for ports {'2b7a5158-ba0d-40a0-a879-ecb41201e794'} is completed#033[00m Nov 28 05:05:18 localhost nova_compute[280168]: 2025-11-28 10:05:18.734 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:18 localhost nova_compute[280168]: 2025-11-28 10:05:18.832 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:19 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:19.187 2 INFO neutron.agent.securitygroups_rpc [None req-8fd6935d-936a-4ee4-abda-f9675ff09d60 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m 
Nov 28 05:05:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail Nov 28 05:05:19 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:19.804 2 INFO neutron.agent.securitygroups_rpc [None req-5f6fc136-bb5c-4f44-93cc-8b8331d1b8c7 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:20 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:20.450 2 INFO neutron.agent.securitygroups_rpc [None req-08674235-5e89-42a7-aefb-bb66d7c4db90 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:20 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:20.937 2 INFO neutron.agent.securitygroups_rpc [None req-ff8acaea-6e1e-4ede-81e1-a7305639837a 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:21.296 2 INFO neutron.agent.securitygroups_rpc [None req-f5c2db0c-7ccc-47c0-8119-58c735104780 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:21.323 2 INFO neutron.agent.securitygroups_rpc [None req-baf56349-7def-4de1-b8e1-b12f76677166 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated ['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:05:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:21.426 2 INFO neutron.agent.securitygroups_rpc [None req-1ffc7eeb-b258-4bda-bcd4-9d4ef92a7b1f 
48f54094604448288052e0203d18d8df 45fdbe27569f45449de58f1d1899ceea - - default default] Security group member updated ['0eb534d2-8077-4b6c-8550-0df9f702073c']#033[00m Nov 28 05:05:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:21.537 2 INFO neutron.agent.securitygroups_rpc [None req-1ffc7eeb-b258-4bda-bcd4-9d4ef92a7b1f 48f54094604448288052e0203d18d8df 45fdbe27569f45449de58f1d1899ceea - - default default] Security group member updated ['0eb534d2-8077-4b6c-8550-0df9f702073c']#033[00m Nov 28 05:05:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Nov 28 05:05:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:22 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:05:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:22 localhost podman[314973]: 2025-11-28 10:05:22.035147131 +0000 UTC m=+0.058906636 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:05:22 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:05:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:05:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:05:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:05:22 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 97fca34c-0541-418d-89cd-2d2423e80b02 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:05:22 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 97fca34c-0541-418d-89cd-2d2423e80b02 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:05:22 localhost ceph-mgr[286188]: [progress INFO root] Completed event 97fca34c-0541-418d-89cd-2d2423e80b02 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:05:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:05:22 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:05:22 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:22.275 2 INFO neutron.agent.securitygroups_rpc [None req-a89ecc9f-78c7-44a1-98c4-c8700080c0c0 4d2a0ee370da4e688f1d8f5c639f278d ac0f848cde7c47c998a8b80087a3b59d - - default default] Security group member updated 
['6784caad-903b-4e5a-b2eb-6bb1d9f8710a']#033[00m Nov 28 05:05:22 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:05:22 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:05:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:22.456 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:22Z, description=, device_id=db8491a4-73bc-4598-975c-a7b74ee65365, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=b37d9acc-d566-4328-b5c8-2033650f3d16, ip_allocation=immediate, mac_address=fa:16:3e:35:f9:75, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1922, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:05:22Z on network 
887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:22 localhost nova_compute[280168]: 2025-11-28 10:05:22.509 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:22 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:22.556 2 INFO neutron.agent.securitygroups_rpc [None req-d94caea8-60f5-4f9f-b771-ee8db26744a5 48f54094604448288052e0203d18d8df 45fdbe27569f45449de58f1d1899ceea - - default default] Security group member updated ['0eb534d2-8077-4b6c-8550-0df9f702073c']#033[00m Nov 28 05:05:22 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 5 addresses Nov 28 05:05:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:22 localhost podman[315044]: 2025-11-28 10:05:22.69062034 +0000 UTC m=+0.058948687 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:05:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:22.932 261346 INFO neutron.agent.dhcp.agent [None req-6f9eb19a-9e92-4ddc-91ef-da613ff77b77 - - - - - -] DHCP configuration for ports {'b37d9acc-d566-4328-b5c8-2033650f3d16'} is completed#033[00m Nov 28 05:05:22 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:22.954 2 INFO neutron.agent.securitygroups_rpc 
[None req-e4fadf30-c514-41d2-a5af-51c101b5ad33 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:23 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:23.176 2 INFO neutron.agent.securitygroups_rpc [None req-e4e98f94-fcd2-4011-84de-9dbf12942472 48f54094604448288052e0203d18d8df 45fdbe27569f45449de58f1d1899ceea - - default default] Security group member updated ['0eb534d2-8077-4b6c-8550-0df9f702073c']#033[00m Nov 28 05:05:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:23.213 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:23 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:23.416 2 INFO neutron.agent.securitygroups_rpc [None req-ab356ba5-c3b5-409b-9ecc-7c688fb94166 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:23 localhost nova_compute[280168]: 2025-11-28 10:05:23.763 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Nov 28 05:05:23 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:23.806 2 INFO neutron.agent.securitygroups_rpc [None req-237dd594-9b00-4d69-9190-5464bb1ce820 a22fc7f25dd74349be1fe8842a517a9e e8063417cfc540ba8948734a0d2952d7 - - default default] Security group member updated ['f26ff442-2d58-414e-b33d-dcdad9bae629']#033[00m Nov 28 05:05:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:05:23 localhost podman[315064]: 2025-11-28 10:05:23.998360631 +0000 UTC m=+0.095571420 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7) Nov 28 05:05:24 localhost podman[315064]: 2025-11-28 10:05:24.011815793 +0000 UTC m=+0.109026552 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base 
Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 05:05:24 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:05:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:24.441 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8:0:1:f816:3eff:fef8:ad87'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef8:ad87/64', 'neutron:device_id': 'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3e1a061c-89bc-4b63-8b60-f49fb95addda, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ef9eb238-2b1e-49f7-8a0f-72efc8854e0f) old=Port_Binding(mac=['fa:16:3e:f8:ad:87 2001:db8::f816:3eff:fef8:ad87'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef8:ad87/64', 'neutron:device_id': 
'ovnmeta-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8642adde-54ae-4fc2-b997-bf1962c6c7f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a8f8694ac11a4237ad168b64c39ca114', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:24.443 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ef9eb238-2b1e-49f7-8a0f-72efc8854e0f in datapath 8642adde-54ae-4fc2-b997-bf1962c6c7f1 updated#033[00m Nov 28 05:05:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:24.445 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8642adde-54ae-4fc2-b997-bf1962c6c7f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:05:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:24.446 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[a4102194-aa3e-407c-adac-55234b6b0d34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:25 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:25.479 2 INFO neutron.agent.securitygroups_rpc [None req-1ee68db6-3f9b-4e96-8e85-23a114c9c60e 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:25 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses Nov 28 05:05:25 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 
28 05:05:25 localhost podman[315102]: 2025-11-28 10:05:25.718189721 +0000 UTC m=+0.063504978 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:05:25 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 783 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 596 B/s wr, 14 op/s Nov 28 05:05:25 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:05:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:05:26 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:05:26 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:26 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:26 localhost podman[315139]: 2025-11-28 10:05:26.417597767 +0000 UTC m=+0.055739460 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, 
org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:05:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:05:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:26.906 2 INFO neutron.agent.securitygroups_rpc [None req-b7b686b0-df55-421e-8492-2a2961409c1a 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:27 localhost nova_compute[280168]: 2025-11-28 10:05:27.512 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:27 localhost openstack_network_exporter[240973]: ERROR 10:05:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:05:27 localhost openstack_network_exporter[240973]: ERROR 10:05:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:05:27 localhost openstack_network_exporter[240973]: ERROR 10:05:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:05:27 localhost openstack_network_exporter[240973]: ERROR 10:05:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:05:27 localhost openstack_network_exporter[240973]: Nov 28 05:05:27 localhost openstack_network_exporter[240973]: ERROR 10:05:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 
28 05:05:27 localhost openstack_network_exporter[240973]: Nov 28 05:05:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v283: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 42 op/s Nov 28 05:05:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:27.907 261346 INFO neutron.agent.linux.ip_lib [None req-b14b2469-1a83-47d7-a3c4-c4f8422d291d - - - - - -] Device tap263dd990-bb cannot be used as it has no MAC address#033[00m Nov 28 05:05:27 localhost nova_compute[280168]: 2025-11-28 10:05:27.930 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:27 localhost kernel: device tap263dd990-bb entered promiscuous mode Nov 28 05:05:27 localhost NetworkManager[5965]: [1764324327.9399] manager: (tap263dd990-bb): new Generic device (/org/freedesktop/NetworkManager/Devices/27) Nov 28 05:05:27 localhost ovn_controller[152726]: 2025-11-28T10:05:27Z|00109|binding|INFO|Claiming lport 263dd990-bba8-43be-a704-af8089f8d063 for this chassis. Nov 28 05:05:27 localhost ovn_controller[152726]: 2025-11-28T10:05:27Z|00110|binding|INFO|263dd990-bba8-43be-a704-af8089f8d063: Claiming unknown Nov 28 05:05:27 localhost nova_compute[280168]: 2025-11-28 10:05:27.939 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:27 localhost systemd-udevd[315170]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:05:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:27.949 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-76551b5f-5d3c-486b-8256-6697e6d961af', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76551b5f-5d3c-486b-8256-6697e6d961af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e093bf79-07f4-4a53-b10e-ba79c6e89219, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=263dd990-bba8-43be-a704-af8089f8d063) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:27.951 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 263dd990-bba8-43be-a704-af8089f8d063 in datapath 76551b5f-5d3c-486b-8256-6697e6d961af bound to our chassis#033[00m Nov 28 05:05:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:27.952 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 76551b5f-5d3c-486b-8256-6697e6d961af or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:27.953 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[d54fba90-9f1b-4dac-a58b-db288154cd34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:27 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:27 localhost ovn_controller[152726]: 2025-11-28T10:05:27Z|00111|binding|INFO|Setting lport 263dd990-bba8-43be-a704-af8089f8d063 ovn-installed in OVS Nov 28 05:05:27 localhost ovn_controller[152726]: 2025-11-28T10:05:27Z|00112|binding|INFO|Setting lport 263dd990-bba8-43be-a704-af8089f8d063 up in Southbound Nov 28 05:05:27 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:27 localhost nova_compute[280168]: 2025-11-28 10:05:27.983 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:27 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:27 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:27 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:28 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:28 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:28 localhost journal[228057]: ethtool ioctl error on tap263dd990-bb: No such device Nov 28 05:05:28 localhost nova_compute[280168]: 2025-11-28 10:05:28.024 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:28 localhost nova_compute[280168]: 2025-11-28 10:05:28.050 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:28 localhost nova_compute[280168]: 2025-11-28 10:05:28.766 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:28 localhost podman[315242]: Nov 28 05:05:28 localhost podman[315242]: 2025-11-28 10:05:28.882151091 +0000 UTC m=+0.084163831 container create b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:05:28 localhost podman[239012]: time="2025-11-28T10:05:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:05:28 localhost systemd[1]: Started libpod-conmon-b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4.scope. Nov 28 05:05:28 localhost podman[315242]: 2025-11-28 10:05:28.841127943 +0000 UTC m=+0.043140683 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:05:28 localhost systemd[1]: Started libcrun container. 
Nov 28 05:05:28 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f596bc5e7c28d7a22b37b15d00f7baa6c61a1b94d6dc8c55a4dae31f2f97828d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:05:28 localhost podman[315242]: 2025-11-28 10:05:28.959272864 +0000 UTC m=+0.161285564 container init b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:28 localhost podman[315242]: 2025-11-28 10:05:28.968448325 +0000 UTC m=+0.170461025 container start b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:05:28 localhost podman[239012]: @ - - [28/Nov/2025:10:05:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158143 "" "Go-http-client/1.1" Nov 28 05:05:28 localhost dnsmasq[315260]: started, version 2.85 cachesize 150 Nov 28 05:05:28 localhost dnsmasq[315260]: DNS service limited to local subnets Nov 28 05:05:28 localhost dnsmasq[315260]: compile 
time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:05:28 localhost dnsmasq[315260]: warning: no upstream servers configured Nov 28 05:05:28 localhost dnsmasq-dhcp[315260]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:05:28 localhost dnsmasq[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/addn_hosts - 0 addresses Nov 28 05:05:28 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/host Nov 28 05:05:28 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/opts Nov 28 05:05:29 localhost podman[239012]: @ - - [28/Nov/2025:10:05:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19672 "" "Go-http-client/1.1" Nov 28 05:05:29 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:29.214 261346 INFO neutron.agent.dhcp.agent [None req-4eb272e4-06b1-4cdc-acc8-b7176b6e35dc - - - - - -] DHCP configuration for ports {'0a750d38-4e6c-43e8-93df-caac4798b0d3'} is completed#033[00m Nov 28 05:05:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 42 op/s Nov 28 05:05:30 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:30.550 2 INFO neutron.agent.securitygroups_rpc [None req-3bea7d0a-d883-4475-b5cf-e7689fbb3c41 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:31 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:31.134 261346 INFO neutron.agent.linux.ip_lib [None req-52264fc1-85c6-49e7-bf5f-d4bc4fa204b0 - - - - - -] Device tap6e1d8ae7-e5 cannot be used as it has no MAC address#033[00m Nov 28 05:05:31 localhost nova_compute[280168]: 
2025-11-28 10:05:31.158 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:31 localhost kernel: device tap6e1d8ae7-e5 entered promiscuous mode Nov 28 05:05:31 localhost NetworkManager[5965]: [1764324331.1663] manager: (tap6e1d8ae7-e5): new Generic device (/org/freedesktop/NetworkManager/Devices/28) Nov 28 05:05:31 localhost nova_compute[280168]: 2025-11-28 10:05:31.169 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:31 localhost ovn_controller[152726]: 2025-11-28T10:05:31Z|00113|binding|INFO|Claiming lport 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 for this chassis. Nov 28 05:05:31 localhost ovn_controller[152726]: 2025-11-28T10:05:31Z|00114|binding|INFO|6e1d8ae7-e505-45e5-a475-e8b27c1cbd76: Claiming unknown Nov 28 05:05:31 localhost systemd-udevd[315271]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:05:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:31.178 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-a9e7bee2-adb4-4094-93fb-3cf35621d144', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9e7bee2-adb4-4094-93fb-3cf35621d144', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b44c93-fed4-4a4f-bdce-cdd61f440a29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e1d8ae7-e505-45e5-a475-e8b27c1cbd76) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:31.180 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 in datapath a9e7bee2-adb4-4094-93fb-3cf35621d144 bound to our chassis#033[00m Nov 28 05:05:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:31.182 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a9e7bee2-adb4-4094-93fb-3cf35621d144 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:31.182 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[65397a8c-f46d-453e-b6ff-6eed8ff6cf8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:31.197 2 INFO neutron.agent.securitygroups_rpc [None req-de2b1d93-259b-466a-9f3c-b35e66480eb7 1170e00b193540e48a34e21b33f08b7a a8f8694ac11a4237ad168b64c39ca114 - - default default] Security group member updated ['7f513fd2-7d14-47f7-892b-9f7b5ed2e3c4']#033[00m Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost ovn_controller[152726]: 2025-11-28T10:05:31Z|00115|binding|INFO|Setting lport 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 ovn-installed in OVS Nov 28 05:05:31 localhost ovn_controller[152726]: 2025-11-28T10:05:31Z|00116|binding|INFO|Setting lport 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 up in Southbound Nov 28 05:05:31 localhost nova_compute[280168]: 2025-11-28 10:05:31.200 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 localhost journal[228057]: ethtool ioctl error on tap6e1d8ae7-e5: No such device Nov 28 05:05:31 
localhost nova_compute[280168]: 2025-11-28 10:05:31.242 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:31 localhost nova_compute[280168]: 2025-11-28 10:05:31.273 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v285: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.0 KiB/s wr, 59 op/s Nov 28 05:05:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:32 localhost podman[315342]: Nov 28 05:05:32 localhost podman[315342]: 2025-11-28 10:05:32.105850721 +0000 UTC m=+0.090567566 container create d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:32 localhost systemd[1]: Started libpod-conmon-d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4.scope. Nov 28 05:05:32 localhost systemd[1]: Started libcrun container. 
Nov 28 05:05:32 localhost podman[315342]: 2025-11-28 10:05:32.061343637 +0000 UTC m=+0.046060522 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:05:32 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b06168b56e6b283e5d79caea478a11e9296a7b7a714e7c8158e200f6a95c8f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:05:32 localhost podman[315342]: 2025-11-28 10:05:32.17335136 +0000 UTC m=+0.158068205 container init d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125) Nov 28 05:05:32 localhost podman[315342]: 2025-11-28 10:05:32.187204105 +0000 UTC m=+0.171920960 container start d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:32 localhost dnsmasq[315360]: started, version 2.85 cachesize 150 Nov 28 05:05:32 localhost dnsmasq[315360]: DNS service limited to local subnets Nov 28 05:05:32 localhost dnsmasq[315360]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:05:32 localhost dnsmasq[315360]: warning: no upstream servers configured Nov 28 05:05:32 localhost dnsmasq-dhcp[315360]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:05:32 localhost dnsmasq[315360]: read /var/lib/neutron/dhcp/a9e7bee2-adb4-4094-93fb-3cf35621d144/addn_hosts - 0 addresses Nov 28 05:05:32 localhost dnsmasq-dhcp[315360]: read /var/lib/neutron/dhcp/a9e7bee2-adb4-4094-93fb-3cf35621d144/host Nov 28 05:05:32 localhost dnsmasq-dhcp[315360]: read /var/lib/neutron/dhcp/a9e7bee2-adb4-4094-93fb-3cf35621d144/opts Nov 28 05:05:32 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:32.369 261346 INFO neutron.agent.dhcp.agent [None req-42100a45-6df4-4646-912d-569842e8287a - - - - - -] DHCP configuration for ports {'c0212f63-6473-42a7-8ffc-e5ce666be6b1'} is completed#033[00m Nov 28 05:05:32 localhost nova_compute[280168]: 2025-11-28 10:05:32.550 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:33.244 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v286: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.4 KiB/s wr, 45 op/s Nov 28 05:05:33 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:33.775 2 INFO neutron.agent.securitygroups_rpc [None req-04197ece-4fb3-43df-90bb-b0e309825a8e cb58c56533984a129050414c9b160b63 de1aeac8abd545fcb83eb3ee06f16689 - - default default] Security group member updated ['805fa77a-da24-42d8-9154-db9402b01c3e']#033[00m Nov 28 05:05:33 localhost nova_compute[280168]: 2025-11-28 10:05:33.799 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:33.838 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:33Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=908c0c28-dfd3-45f3-9a9c-b54daa8aaee7, ip_allocation=immediate, mac_address=fa:16:3e:bd:c0:a8, name=tempest-RoutersAdminNegativeIpV6Test-1418326863, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=True, project_id=de1aeac8abd545fcb83eb3ee06f16689, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['805fa77a-da24-42d8-9154-db9402b01c3e'], standard_attr_id=1991, status=DOWN, tags=[], tenant_id=de1aeac8abd545fcb83eb3ee06f16689, updated_at=2025-11-28T10:05:33Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:05:34 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 4 addresses 
Nov 28 05:05:34 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:34 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:34 localhost podman[315379]: 2025-11-28 10:05:34.073472446 +0000 UTC m=+0.063494417 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 05:05:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:05:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 05:05:34 localhost podman[315396]: 2025-11-28 10:05:34.19434403 +0000 UTC m=+0.086478121 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true) Nov 28 05:05:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:05:34 localhost podman[315395]: 2025-11-28 10:05:34.254636298 +0000 UTC m=+0.144958643 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251125) Nov 28 05:05:34 localhost podman[315396]: 2025-11-28 10:05:34.273539657 +0000 UTC m=+0.165673708 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 05:05:34 localhost podman[315429]: 2025-11-28 10:05:34.308700725 +0000 UTC m=+0.088339038 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 
'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:05:34 localhost podman[315395]: 2025-11-28 10:05:34.309552201 +0000 UTC m=+0.199874526 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 05:05:34 localhost 
systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:05:34 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:05:34 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:05:34 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:34.343 261346 INFO neutron.agent.dhcp.agent [None req-fe96075c-60cc-4c18-a44e-fcb71d2e90bb - - - - - -] DHCP configuration for ports {'908c0c28-dfd3-45f3-9a9c-b54daa8aaee7'} is completed#033[00m Nov 28 05:05:34 localhost podman[315429]: 2025-11-28 10:05:34.39368268 +0000 UTC m=+0.173320993 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:05:34 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:05:34 localhost podman[315463]: 2025-11-28 10:05:34.416156048 +0000 UTC m=+0.072643897 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true) Nov 28 05:05:34 localhost podman[315463]: 2025-11-28 10:05:34.426858707 +0000 UTC m=+0.083346546 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Nov 28 05:05:34 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:05:35 localhost systemd[1]: tmp-crun.q1a27R.mount: Deactivated successfully. 
Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:05:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:05:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.4 KiB/s wr, 45 op/s Nov 28 05:05:35 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:35.883 2 INFO neutron.agent.securitygroups_rpc [None req-9c55fb28-c01b-40d6-8fec-099f9b722777 cb58c56533984a129050414c9b160b63 de1aeac8abd545fcb83eb3ee06f16689 - - default default] Security group member updated ['805fa77a-da24-42d8-9154-db9402b01c3e']#033[00m Nov 28 05:05:36 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:05:36 localhost podman[315502]: 2025-11-28 10:05:36.106233546 +0000 UTC m=+0.057763610 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:36 localhost dnsmasq-dhcp[310862]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:36 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e152 e152: 6 total, 6 up, 6 in Nov 28 05:05:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:37 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:05:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:37 localhost podman[315540]: 2025-11-28 10:05:37.056698767 +0000 UTC m=+0.067067496 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.098 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:37.144 261346 INFO neutron.agent.linux.ip_lib [None req-4ac1aed3-fa52-4606-9efc-ba3ad05f7bfc - - - - - -] Device tap7525b617-bc cannot be used as it has no MAC address#033[00m Nov 28 05:05:37 localhost 
nova_compute[280168]: 2025-11-28 10:05:37.199 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost kernel: device tap7525b617-bc entered promiscuous mode Nov 28 05:05:37 localhost ovn_controller[152726]: 2025-11-28T10:05:37Z|00117|binding|INFO|Claiming lport 7525b617-bc2f-464e-a494-d304810bfb3d for this chassis. Nov 28 05:05:37 localhost ovn_controller[152726]: 2025-11-28T10:05:37Z|00118|binding|INFO|7525b617-bc2f-464e-a494-d304810bfb3d: Claiming unknown Nov 28 05:05:37 localhost NetworkManager[5965]: [1764324337.2080] manager: (tap7525b617-bc): new Generic device (/org/freedesktop/NetworkManager/Devices/29) Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.208 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost systemd-udevd[315573]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:05:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:37.218 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-a168791a-0c5a-4a6f-ab00-175cc6c1bb37', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a168791a-0c5a-4a6f-ab00-175cc6c1bb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abd1fad4-7cc2-47af-bcf3-845289d3423e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7525b617-bc2f-464e-a494-d304810bfb3d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:37.219 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 7525b617-bc2f-464e-a494-d304810bfb3d in datapath a168791a-0c5a-4a6f-ab00-175cc6c1bb37 bound to our chassis#033[00m Nov 28 05:05:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:37.221 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a168791a-0c5a-4a6f-ab00-175cc6c1bb37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:37.222 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[b5fc9f31-7d5e-4b46-8118-afbd48211107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost ovn_controller[152726]: 2025-11-28T10:05:37Z|00119|binding|INFO|Setting lport 7525b617-bc2f-464e-a494-d304810bfb3d ovn-installed in OVS Nov 28 05:05:37 localhost ovn_controller[152726]: 2025-11-28T10:05:37Z|00120|binding|INFO|Setting lport 7525b617-bc2f-464e-a494-d304810bfb3d up in Southbound Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.246 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.248 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost journal[228057]: ethtool ioctl error on tap7525b617-bc: No such device Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.277 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.299 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost nova_compute[280168]: 2025-11-28 10:05:37.552 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 76 op/s Nov 28 05:05:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:05:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:37.935 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:37Z, description=, device_id=773237be-aa27-4adb-a0c5-c56bdc56f175, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fdf90593-78c6-4222-ac7b-fcd71032b0ab, ip_allocation=immediate, mac_address=fa:16:3e:09:d3:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:25Z, description=, dns_domain=, id=76551b5f-5d3c-486b-8256-6697e6d961af, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1822665596, port_security_enabled=True, project_id=79c7da76f5894b10864b69d1961b95ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34453, qos_policy_id=None, revision_number=2, 
router:external=False, shared=False, standard_attr_id=1947, status=ACTIVE, subnets=['3c53e558-3975-44a5-a65d-1cc78bc25b73'], tags=[], tenant_id=79c7da76f5894b10864b69d1961b95ab, updated_at=2025-11-28T10:05:26Z, vlan_transparent=None, network_id=76551b5f-5d3c-486b-8256-6697e6d961af, port_security_enabled=False, project_id=79c7da76f5894b10864b69d1961b95ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2014, status=DOWN, tags=[], tenant_id=79c7da76f5894b10864b69d1961b95ab, updated_at=2025-11-28T10:05:37Z on network 76551b5f-5d3c-486b-8256-6697e6d961af#033[00m Nov 28 05:05:38 localhost podman[315623]: 2025-11-28 10:05:38.020149035 +0000 UTC m=+0.131394318 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:05:38 localhost podman[315623]: 2025-11-28 10:05:38.032487853 +0000 UTC m=+0.143733186 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:05:38 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:05:38 localhost podman[315661]: Nov 28 05:05:38 localhost podman[315661]: 2025-11-28 10:05:38.085200509 +0000 UTC m=+0.098202321 container create 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:05:38 localhost systemd[1]: Started libpod-conmon-8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a.scope. Nov 28 05:05:38 localhost podman[315661]: 2025-11-28 10:05:38.032870795 +0000 UTC m=+0.045872587 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:05:38 localhost systemd[1]: Started libcrun container. 
Nov 28 05:05:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24636b78bd77589d490104c67446063eaf3f979c2e97bf5584acb9bfe1dedc30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:05:38 localhost dnsmasq[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/addn_hosts - 1 addresses Nov 28 05:05:38 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/host Nov 28 05:05:38 localhost podman[315696]: 2025-11-28 10:05:38.167417458 +0000 UTC m=+0.050750466 container kill b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:05:38 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/opts Nov 28 05:05:38 localhost podman[315661]: 2025-11-28 10:05:38.208990953 +0000 UTC m=+0.221992715 container init 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:38 localhost podman[315661]: 2025-11-28 10:05:38.215915374 
+0000 UTC m=+0.228917166 container start 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:38 localhost dnsmasq[315714]: started, version 2.85 cachesize 150 Nov 28 05:05:38 localhost dnsmasq[315714]: DNS service limited to local subnets Nov 28 05:05:38 localhost dnsmasq[315714]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:05:38 localhost dnsmasq[315714]: warning: no upstream servers configured Nov 28 05:05:38 localhost dnsmasq-dhcp[315714]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d Nov 28 05:05:38 localhost dnsmasq[315714]: read /var/lib/neutron/dhcp/a168791a-0c5a-4a6f-ab00-175cc6c1bb37/addn_hosts - 0 addresses Nov 28 05:05:38 localhost dnsmasq-dhcp[315714]: read /var/lib/neutron/dhcp/a168791a-0c5a-4a6f-ab00-175cc6c1bb37/host Nov 28 05:05:38 localhost dnsmasq-dhcp[315714]: read /var/lib/neutron/dhcp/a168791a-0c5a-4a6f-ab00-175cc6c1bb37/opts Nov 28 05:05:38 localhost nova_compute[280168]: 2025-11-28 10:05:38.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:38 localhost nova_compute[280168]: 2025-11-28 10:05:38.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] 
Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 05:05:38 localhost nova_compute[280168]: 2025-11-28 10:05:38.271 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 05:05:38 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:38.419 261346 INFO neutron.agent.dhcp.agent [None req-e470f9a7-ccdc-4519-b34e-49ce2eb028b4 - - - - - -] DHCP configuration for ports {'fdf90593-78c6-4222-ac7b-fcd71032b0ab', '84f970fc-aab5-4a93-b320-463d8d4ba76e'} is completed#033[00m Nov 28 05:05:38 localhost nova_compute[280168]: 2025-11-28 10:05:38.803 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:39 localhost systemd[1]: tmp-crun.9OK8bS.mount: Deactivated successfully. 
Nov 28 05:05:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:39.099 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:37Z, description=, device_id=773237be-aa27-4adb-a0c5-c56bdc56f175, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=fdf90593-78c6-4222-ac7b-fcd71032b0ab, ip_allocation=immediate, mac_address=fa:16:3e:09:d3:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:25Z, description=, dns_domain=, id=76551b5f-5d3c-486b-8256-6697e6d961af, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeIpV6Test-test-network-1822665596, port_security_enabled=True, project_id=79c7da76f5894b10864b69d1961b95ab, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=34453, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1947, status=ACTIVE, subnets=['3c53e558-3975-44a5-a65d-1cc78bc25b73'], tags=[], tenant_id=79c7da76f5894b10864b69d1961b95ab, updated_at=2025-11-28T10:05:26Z, vlan_transparent=None, network_id=76551b5f-5d3c-486b-8256-6697e6d961af, port_security_enabled=False, project_id=79c7da76f5894b10864b69d1961b95ab, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2014, status=DOWN, tags=[], tenant_id=79c7da76f5894b10864b69d1961b95ab, updated_at=2025-11-28T10:05:37Z on network 76551b5f-5d3c-486b-8256-6697e6d961af#033[00m Nov 28 05:05:39 localhost podman[315739]: 2025-11-28 10:05:39.327268726 +0000 UTC m=+0.083425308 container kill b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125) Nov 28 05:05:39 localhost systemd[1]: tmp-crun.44JQCJ.mount: Deactivated successfully. Nov 28 05:05:39 localhost dnsmasq[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/addn_hosts - 1 addresses Nov 28 05:05:39 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/host Nov 28 05:05:39 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/opts Nov 28 05:05:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:05:39 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3980002561' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:05:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:05:39 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3980002561' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:05:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:39.680 261346 INFO neutron.agent.dhcp.agent [None req-bad90eee-5e0c-4cbf-a1ed-730e9f04425a - - - - - -] DHCP configuration for ports {'fdf90593-78c6-4222-ac7b-fcd71032b0ab'} is completed#033[00m Nov 28 05:05:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v290: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 76 op/s Nov 28 05:05:40 localhost podman[315778]: 2025-11-28 10:05:40.706543607 +0000 UTC m=+0.056958856 container kill b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:40 localhost dnsmasq[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/addn_hosts - 0 addresses Nov 28 05:05:40 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/host Nov 28 05:05:40 localhost dnsmasq-dhcp[315260]: read /var/lib/neutron/dhcp/76551b5f-5d3c-486b-8256-6697e6d961af/opts Nov 28 05:05:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:05:40 localhost ovn_controller[152726]: 2025-11-28T10:05:40Z|00121|binding|INFO|Releasing lport 263dd990-bba8-43be-a704-af8089f8d063 from this chassis (sb_readonly=0) Nov 28 05:05:40 localhost ovn_controller[152726]: 2025-11-28T10:05:40Z|00122|binding|INFO|Setting lport 263dd990-bba8-43be-a704-af8089f8d063 down in Southbound Nov 28 05:05:40 localhost nova_compute[280168]: 2025-11-28 10:05:40.928 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:40.937 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-76551b5f-5d3c-486b-8256-6697e6d961af', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76551b5f-5d3c-486b-8256-6697e6d961af', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e093bf79-07f4-4a53-b10e-ba79c6e89219, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=263dd990-bba8-43be-a704-af8089f8d063) old=Port_Binding(up=[True], chassis=[]) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:40 localhost kernel: device tap263dd990-bb left promiscuous mode Nov 28 05:05:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:40.939 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 263dd990-bba8-43be-a704-af8089f8d063 in datapath 76551b5f-5d3c-486b-8256-6697e6d961af unbound from our chassis#033[00m Nov 28 05:05:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:40.942 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 76551b5f-5d3c-486b-8256-6697e6d961af or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:40 localhost nova_compute[280168]: 2025-11-28 10:05:40.941 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:40 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:40.943 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[e1cfdf71-2016-45f4-9a8e-dd91f2bb3ef8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:40 localhost nova_compute[280168]: 2025-11-28 10:05:40.955 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:40 localhost systemd[1]: tmp-crun.fgxX5K.mount: Deactivated successfully. 
Nov 28 05:05:41 localhost podman[315799]: 2025-11-28 10:05:40.999660451 +0000 UTC m=+0.103219394 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, tcib_managed=true) Nov 28 05:05:41 localhost podman[315799]: 2025-11-28 10:05:41.041532455 +0000 UTC m=+0.145091418 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:41 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:05:41 localhost nova_compute[280168]: 2025-11-28 10:05:41.270 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:41 localhost nova_compute[280168]: 2025-11-28 10:05:41.272 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:41 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:05:41 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:05:41 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:05:41 localhost podman[315835]: 2025-11-28 10:05:41.307119125 +0000 UTC m=+0.054952166 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:41 localhost dnsmasq[315714]: exiting on receipt of SIGTERM Nov 28 05:05:41 localhost podman[315871]: 2025-11-28 10:05:41.674147263 +0000 UTC m=+0.059407141 container kill 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.vendor=CentOS, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:41 localhost systemd[1]: libpod-8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a.scope: Deactivated successfully. Nov 28 05:05:41 localhost podman[315883]: 2025-11-28 10:05:41.743040264 +0000 UTC m=+0.058445581 container died 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:41 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:41 localhost systemd[1]: var-lib-containers-storage-overlay-24636b78bd77589d490104c67446063eaf3f979c2e97bf5584acb9bfe1dedc30-merged.mount: Deactivated successfully. 
Nov 28 05:05:41 localhost podman[315883]: 2025-11-28 10:05:41.773651633 +0000 UTC m=+0.089056910 container cleanup 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 05:05:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.5 KiB/s wr, 84 op/s Nov 28 05:05:41 localhost systemd[1]: libpod-conmon-8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a.scope: Deactivated successfully. 
Nov 28 05:05:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:41 localhost podman[315890]: 2025-11-28 10:05:41.819059615 +0000 UTC m=+0.122707562 container remove 8dd6126469f09ec4480a793fbe3fb8029b57ea0ba328dc8ed4db77ac55e2d94a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a168791a-0c5a-4a6f-ab00-175cc6c1bb37, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:05:41 localhost ovn_controller[152726]: 2025-11-28T10:05:41Z|00123|binding|INFO|Releasing lport 7525b617-bc2f-464e-a494-d304810bfb3d from this chassis (sb_readonly=0) Nov 28 05:05:41 localhost ovn_controller[152726]: 2025-11-28T10:05:41Z|00124|binding|INFO|Setting lport 7525b617-bc2f-464e-a494-d304810bfb3d down in Southbound Nov 28 05:05:41 localhost kernel: device tap7525b617-bc left promiscuous mode Nov 28 05:05:41 localhost nova_compute[280168]: 2025-11-28 10:05:41.827 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:41.838 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], 
external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-a168791a-0c5a-4a6f-ab00-175cc6c1bb37', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a168791a-0c5a-4a6f-ab00-175cc6c1bb37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abd1fad4-7cc2-47af-bcf3-845289d3423e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7525b617-bc2f-464e-a494-d304810bfb3d) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:41.840 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 7525b617-bc2f-464e-a494-d304810bfb3d in datapath a168791a-0c5a-4a6f-ab00-175cc6c1bb37 unbound from our chassis#033[00m Nov 28 05:05:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:41.841 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a168791a-0c5a-4a6f-ab00-175cc6c1bb37 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:41.842 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[95c89f6b-8365-4ab5-8024-d667af7571ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:41 localhost nova_compute[280168]: 2025-11-28 10:05:41.850 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:42.135 261346 INFO neutron.agent.dhcp.agent [None req-aa8571c8-c2fb-4cb0-a85a-a2ca3cd531f5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:42 localhost nova_compute[280168]: 2025-11-28 10:05:42.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:42.503 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:42 localhost nova_compute[280168]: 2025-11-28 10:05:42.555 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:42 localhost systemd[1]: run-netns-qdhcp\x2da168791a\x2d0c5a\x2d4a6f\x2dab00\x2d175cc6c1bb37.mount: Deactivated successfully. 
Nov 28 05:05:43 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:43.079 261346 INFO neutron.agent.linux.ip_lib [None req-852ff37c-0e8a-49ad-babb-d02a979c0f09 - - - - - -] Device tapda869e93-67 cannot be used as it has no MAC address#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.106 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost kernel: device tapda869e93-67 entered promiscuous mode Nov 28 05:05:43 localhost NetworkManager[5965]: [1764324343.1162] manager: (tapda869e93-67): new Generic device (/org/freedesktop/NetworkManager/Devices/30) Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.116 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost ovn_controller[152726]: 2025-11-28T10:05:43Z|00125|binding|INFO|Claiming lport da869e93-67dd-47bf-bfed-34a8468836ea for this chassis. Nov 28 05:05:43 localhost ovn_controller[152726]: 2025-11-28T10:05:43Z|00126|binding|INFO|da869e93-67dd-47bf-bfed-34a8468836ea: Claiming unknown Nov 28 05:05:43 localhost systemd-udevd[315924]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:05:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:43.127 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-19e7252b-db63-489c-9a93-8026377ebe8c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-19e7252b-db63-489c-9a93-8026377ebe8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79185418333d4a93b24c87e39a4a1847', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c54a0e-746c-4aa6-aa68-60e31c1cb041, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da869e93-67dd-47bf-bfed-34a8468836ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:43.129 158530 INFO neutron.agent.ovn.metadata.agent [-] Port da869e93-67dd-47bf-bfed-34a8468836ea in datapath 19e7252b-db63-489c-9a93-8026377ebe8c bound to our chassis#033[00m Nov 28 05:05:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:43.130 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 19e7252b-db63-489c-9a93-8026377ebe8c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:43.131 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[18b95250-9a69-4a55-a8e6-342f71ad1d83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost ovn_controller[152726]: 2025-11-28T10:05:43Z|00127|binding|INFO|Setting lport da869e93-67dd-47bf-bfed-34a8468836ea ovn-installed in OVS Nov 28 05:05:43 localhost ovn_controller[152726]: 2025-11-28T10:05:43Z|00128|binding|INFO|Setting lport da869e93-67dd-47bf-bfed-34a8468836ea up in Southbound Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.158 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.162 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost journal[228057]: ethtool ioctl error on tapda869e93-67: No such device Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.200 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.229 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:05:43 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:43.439 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:43 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:43.605 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:43 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:43.761 2 INFO neutron.agent.securitygroups_rpc [None req-1640c1ae-d5d6-4903-a020-69a5c84dc198 e0f29aacf6a94315b178b4a16e3fd03d 79185418333d4a93b24c87e39a4a1847 - - default default] Security group member updated ['f7d47ffa-9780-427b-aaf2-f0de3a638f8a']#033[00m Nov 28 05:05:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v292: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.5 KiB/s wr, 84 op/s Nov 28 05:05:43 localhost nova_compute[280168]: 2025-11-28 10:05:43.806 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:44 localhost podman[315995]: Nov 28 05:05:44 localhost podman[315995]: 2025-11-28 10:05:44.037134414 +0000 UTC m=+0.086483651 container create fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:05:44 localhost nova_compute[280168]: 2025-11-28 10:05:44.090 
280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:44 localhost podman[315995]: 2025-11-28 10:05:44.003688959 +0000 UTC m=+0.053038306 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:05:44 localhost systemd[1]: Started libpod-conmon-fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18.scope. Nov 28 05:05:44 localhost systemd[1]: Started libcrun container. Nov 28 05:05:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/997025a36efb1e50d9ffe44c2974a8638074eb37fef508bc2be0cc49df956a21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:05:44 localhost podman[315995]: 2025-11-28 10:05:44.15741102 +0000 UTC m=+0.206760287 container init fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:44 localhost podman[315995]: 2025-11-28 10:05:44.174100582 +0000 UTC m=+0.223449859 container start fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:05:44 localhost dnsmasq[316013]: started, version 2.85 cachesize 150 Nov 28 05:05:44 localhost dnsmasq[316013]: DNS service limited to local subnets Nov 28 05:05:44 localhost dnsmasq[316013]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:05:44 localhost dnsmasq[316013]: warning: no upstream servers configured Nov 28 05:05:44 localhost dnsmasq-dhcp[316013]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:05:44 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 0 addresses Nov 28 05:05:44 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:44 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:44 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:44.237 261346 INFO neutron.agent.dhcp.agent [None req-852ff37c-0e8a-49ad-babb-d02a979c0f09 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2266342c-b2ca-4cdd-98a9-2015ce48326f, ip_allocation=immediate, mac_address=fa:16:3e:42:cc:c5, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1360024416, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:39Z, description=, dns_domain=, id=19e7252b-db63-489c-9a93-8026377ebe8c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1227717976, 
port_security_enabled=True, project_id=79185418333d4a93b24c87e39a4a1847, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25537, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2036, status=ACTIVE, subnets=['e7013afe-a363-4f80-93ea-b1f16b2d44b3'], tags=[], tenant_id=79185418333d4a93b24c87e39a4a1847, updated_at=2025-11-28T10:05:41Z, vlan_transparent=None, network_id=19e7252b-db63-489c-9a93-8026377ebe8c, port_security_enabled=True, project_id=79185418333d4a93b24c87e39a4a1847, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f7d47ffa-9780-427b-aaf2-f0de3a638f8a'], standard_attr_id=2065, status=DOWN, tags=[], tenant_id=79185418333d4a93b24c87e39a4a1847, updated_at=2025-11-28T10:05:43Z on network 19e7252b-db63-489c-9a93-8026377ebe8c#033[00m Nov 28 05:05:44 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 1 addresses Nov 28 05:05:44 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:44 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:44 localhost podman[316033]: 2025-11-28 10:05:44.387201363 +0000 UTC m=+0.051342634 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:44 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:44.388 261346 
INFO neutron.agent.dhcp.agent [None req-94f8b550-a2fb-4a61-9fe3-b005871c19a5 - - - - - -] DHCP configuration for ports {'98f82d8b-5603-4803-8ab5-3c3646965543'} is completed#033[00m Nov 28 05:05:44 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:44.667 261346 INFO neutron.agent.dhcp.agent [None req-d756f844-7324-4ba2-b29e-9d22f50691db - - - - - -] DHCP configuration for ports {'2266342c-b2ca-4cdd-98a9-2015ce48326f'} is completed#033[00m Nov 28 05:05:45 localhost systemd[1]: tmp-crun.uVGOZJ.mount: Deactivated successfully. Nov 28 05:05:45 localhost dnsmasq[315360]: exiting on receipt of SIGTERM Nov 28 05:05:45 localhost podman[316071]: 2025-11-28 10:05:45.231658415 +0000 UTC m=+0.064495128 container kill d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:45 localhost systemd[1]: libpod-d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4.scope: Deactivated successfully. 
Nov 28 05:05:45 localhost podman[316084]: 2025-11-28 10:05:45.296976756 +0000 UTC m=+0.054678876 container died d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:45 localhost podman[316084]: 2025-11-28 10:05:45.379209096 +0000 UTC m=+0.136911176 container cleanup d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:05:45 localhost systemd[1]: libpod-conmon-d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4.scope: Deactivated successfully. 
Nov 28 05:05:45 localhost podman[316086]: 2025-11-28 10:05:45.398502388 +0000 UTC m=+0.144085967 container remove d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a9e7bee2-adb4-4094-93fb-3cf35621d144, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:05:45 localhost nova_compute[280168]: 2025-11-28 10:05:45.413 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:45 localhost ovn_controller[152726]: 2025-11-28T10:05:45Z|00129|binding|INFO|Releasing lport 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 from this chassis (sb_readonly=0) Nov 28 05:05:45 localhost ovn_controller[152726]: 2025-11-28T10:05:45Z|00130|binding|INFO|Setting lport 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 down in Southbound Nov 28 05:05:45 localhost kernel: device tap6e1d8ae7-e5 left promiscuous mode Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.426 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-a9e7bee2-adb4-4094-93fb-3cf35621d144', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-a9e7bee2-adb4-4094-93fb-3cf35621d144', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79c7da76f5894b10864b69d1961b95ab', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d2b44c93-fed4-4a4f-bdce-cdd61f440a29, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=6e1d8ae7-e505-45e5-a475-e8b27c1cbd76) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.428 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1d8ae7-e505-45e5-a475-e8b27c1cbd76 in datapath a9e7bee2-adb4-4094-93fb-3cf35621d144 unbound from our chassis#033[00m Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.429 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a9e7bee2-adb4-4094-93fb-3cf35621d144 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.431 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce95c36-0bdf-4ae1-87cb-40e83bd4a44f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:45 localhost nova_compute[280168]: 2025-11-28 10:05:45.434 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:45 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:45.536 2 INFO 
neutron.agent.securitygroups_rpc [None req-6e211784-e61c-41e0-bb4d-96a8d1625a59 e0f29aacf6a94315b178b4a16e3fd03d 79185418333d4a93b24c87e39a4a1847 - - default default] Security group member updated ['f7d47ffa-9780-427b-aaf2-f0de3a638f8a']#033[00m Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.618 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:44Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=d906e92f-2a14-419f-b7c7-dfd2b9d95ebe, ip_allocation=immediate, mac_address=fa:16:3e:e4:43:f0, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1072293803, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:39Z, description=, dns_domain=, id=19e7252b-db63-489c-9a93-8026377ebe8c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1227717976, port_security_enabled=True, project_id=79185418333d4a93b24c87e39a4a1847, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25537, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2036, status=ACTIVE, subnets=['e7013afe-a363-4f80-93ea-b1f16b2d44b3'], tags=[], tenant_id=79185418333d4a93b24c87e39a4a1847, updated_at=2025-11-28T10:05:41Z, vlan_transparent=None, network_id=19e7252b-db63-489c-9a93-8026377ebe8c, port_security_enabled=True, project_id=79185418333d4a93b24c87e39a4a1847, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f7d47ffa-9780-427b-aaf2-f0de3a638f8a'], standard_attr_id=2079, status=DOWN, tags=[], tenant_id=79185418333d4a93b24c87e39a4a1847, 
updated_at=2025-11-28T10:05:45Z on network 19e7252b-db63-489c-9a93-8026377ebe8c#033[00m Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.637 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.637 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.638 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:45 localhost nova_compute[280168]: 2025-11-28 10:05:45.698 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.700 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:45.702 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:05:45 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 2.5 KiB/s wr, 84 op/s Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.806 261346 INFO neutron.agent.dhcp.agent [None req-dc669399-b00a-4161-8a03-77bb1e8c43e0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:45 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 2 addresses Nov 28 05:05:45 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:45 localhost podman[316130]: 2025-11-28 10:05:45.812527677 +0000 UTC m=+0.059901557 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:05:45 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:45.973 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:46.026 261346 INFO neutron.agent.dhcp.agent [None req-fb96c901-fad4-4460-ae3e-474889f96a78 - - - - - -] DHCP configuration for ports {'d906e92f-2a14-419f-b7c7-dfd2b9d95ebe'} is completed#033[00m Nov 28 05:05:46 localhost systemd[1]: 
var-lib-containers-storage-overlay-5b06168b56e6b283e5d79caea478a11e9296a7b7a714e7c8158e200f6a95c8f4-merged.mount: Deactivated successfully. Nov 28 05:05:46 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5e1c3cb72577915b7d71f2368eb31dc1c34546039766c4ed24cd04c7c5823d4-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:46 localhost systemd[1]: run-netns-qdhcp\x2da9e7bee2\x2dadb4\x2d4094\x2d93fb\x2d3cf35621d144.mount: Deactivated successfully. Nov 28 05:05:46 localhost nova_compute[280168]: 2025-11-28 10:05:46.180 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:46 localhost nova_compute[280168]: 2025-11-28 10:05:46.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:46 localhost nova_compute[280168]: 2025-11-28 10:05:46.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:05:46 localhost nova_compute[280168]: 2025-11-28 10:05:46.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:05:46 localhost nova_compute[280168]: 2025-11-28 10:05:46.254 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:05:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:46.684 2 INFO neutron.agent.securitygroups_rpc [None req-fda551bc-bdc3-478a-93b4-a620175c516e 8d30c732fa674cae8e1de9092f58edd9 3a67d7f32f5e49c3aed3e09278dd6c95 - - default default] Security group member updated ['371ca172-3d0a-4f94-811c-7c823124cef1']#033[00m Nov 28 05:05:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:47 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:47.122 2 INFO neutron.agent.securitygroups_rpc [None req-e42eed40-5a16-457a-8c1b-352e3dbcff3e e0f29aacf6a94315b178b4a16e3fd03d 79185418333d4a93b24c87e39a4a1847 - - default default] Security group member updated ['f7d47ffa-9780-427b-aaf2-f0de3a638f8a']#033[00m Nov 28 05:05:47 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:47.164 2 INFO neutron.agent.securitygroups_rpc [None req-fda551bc-bdc3-478a-93b4-a620175c516e 8d30c732fa674cae8e1de9092f58edd9 3a67d7f32f5e49c3aed3e09278dd6c95 - - default default] Security group member updated ['371ca172-3d0a-4f94-811c-7c823124cef1']#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd 
e153 e153: 6 total, 6 up, 6 in Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.267 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.267 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.268 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.268 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.269 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.558 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:47 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 1 addresses Nov 28 05:05:47 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:47 localhost podman[316188]: 2025-11-28 10:05:47.666395164 +0000 UTC m=+0.070938505 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:47 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:05:47 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/692503189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.736 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:05:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 921 B/s wr, 29 op/s Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.912 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.913 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11556MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": 
"0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.913 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:05:47 localhost nova_compute[280168]: 2025-11-28 10:05:47.913 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:05:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:47.965 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:05:43Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[, , ], fixed_ips=[], id=2266342c-b2ca-4cdd-98a9-2015ce48326f, ip_allocation=immediate, mac_address=fa:16:3e:42:cc:c5, name=tempest-new-port-name-1521843568, network_id=19e7252b-db63-489c-9a93-8026377ebe8c, port_security_enabled=True, project_id=79185418333d4a93b24c87e39a4a1847, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['f7d47ffa-9780-427b-aaf2-f0de3a638f8a'], standard_attr_id=2065, status=DOWN, tags=[], tenant_id=79185418333d4a93b24c87e39a4a1847, updated_at=2025-11-28T10:05:47Z on network 19e7252b-db63-489c-9a93-8026377ebe8c#033[00m Nov 28 05:05:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:47.979 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option server-ip-address because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:47.980 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option tftp-server because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:47.980 261346 INFO neutron.agent.linux.dhcp [-] Cannot apply dhcp option bootfile-name because it's ip_version 4 is not in port's address IP versions#033[00m Nov 28 05:05:48 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 1 addresses Nov 28 05:05:48 localhost dnsmasq-dhcp[316013]: read 
/var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:48 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:48 localhost podman[316229]: 2025-11-28 10:05:48.121314646 +0000 UTC m=+0.044607568 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.190 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.191 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.287 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:05:48 localhost 
neutron_sriov_agent[254415]: 2025-11-28 10:05:48.349 2 INFO neutron.agent.securitygroups_rpc [None req-7a28cb1b-8d30-40dd-8ac7-551ea3c1b87c e0f29aacf6a94315b178b4a16e3fd03d 79185418333d4a93b24c87e39a4a1847 - - default default] Security group member updated ['f7d47ffa-9780-427b-aaf2-f0de3a638f8a']#033[00m Nov 28 05:05:48 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:48.376 261346 INFO neutron.agent.dhcp.agent [None req-93a0b611-3b28-46d3-86a4-ec7d6c12c009 - - - - - -] DHCP configuration for ports {'2266342c-b2ca-4cdd-98a9-2015ce48326f'} is completed#033[00m Nov 28 05:05:48 localhost dnsmasq[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/addn_hosts - 0 addresses Nov 28 05:05:48 localhost podman[316284]: 2025-11-28 10:05:48.541656709 +0000 UTC m=+0.055291665 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 05:05:48 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/host Nov 28 05:05:48 localhost dnsmasq-dhcp[316013]: read /var/lib/neutron/dhcp/19e7252b-db63-489c-9a93-8026377ebe8c/opts Nov 28 05:05:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:05:48 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1333398748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.740 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.747 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.762 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.765 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.765 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:05:48 localhost nova_compute[280168]: 2025-11-28 10:05:48.808 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:49 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:49.000 2 INFO neutron.agent.securitygroups_rpc [None req-0fb2cb74-ebe1-4595-9145-49dcf60353ff 8d30c732fa674cae8e1de9092f58edd9 3a67d7f32f5e49c3aed3e09278dd6c95 - - default default] Security group member updated ['371ca172-3d0a-4f94-811c-7c823124cef1']#033[00m Nov 28 05:05:49 localhost dnsmasq[316013]: exiting on receipt of SIGTERM Nov 28 05:05:49 localhost podman[316323]: 2025-11-28 10:05:49.018344678 +0000 UTC m=+0.060012280 container kill fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:05:49 localhost systemd[1]: libpod-fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18.scope: Deactivated successfully. 
Nov 28 05:05:49 localhost podman[316335]: 2025-11-28 10:05:49.091126189 +0000 UTC m=+0.058034769 container died fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:05:49 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:49 localhost podman[316335]: 2025-11-28 10:05:49.127294177 +0000 UTC m=+0.094203077 container cleanup fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 05:05:49 localhost systemd[1]: libpod-conmon-fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18.scope: Deactivated successfully. 
Nov 28 05:05:49 localhost podman[316337]: 2025-11-28 10:05:49.168673496 +0000 UTC m=+0.128317904 container remove fc1c2e6c8e588cd0ca5bbf1694bcef69f472745fa13bbc4a9abc9b4ba2d19e18 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-19e7252b-db63-489c-9a93-8026377ebe8c, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:05:49 localhost nova_compute[280168]: 2025-11-28 10:05:49.179 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:49 localhost ovn_controller[152726]: 2025-11-28T10:05:49Z|00131|binding|INFO|Releasing lport da869e93-67dd-47bf-bfed-34a8468836ea from this chassis (sb_readonly=0) Nov 28 05:05:49 localhost kernel: device tapda869e93-67 left promiscuous mode Nov 28 05:05:49 localhost ovn_controller[152726]: 2025-11-28T10:05:49Z|00132|binding|INFO|Setting lport da869e93-67dd-47bf-bfed-34a8468836ea down in Southbound Nov 28 05:05:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:49.191 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-19e7252b-db63-489c-9a93-8026377ebe8c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-19e7252b-db63-489c-9a93-8026377ebe8c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79185418333d4a93b24c87e39a4a1847', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c54a0e-746c-4aa6-aa68-60e31c1cb041, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=da869e93-67dd-47bf-bfed-34a8468836ea) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:49.193 158530 INFO neutron.agent.ovn.metadata.agent [-] Port da869e93-67dd-47bf-bfed-34a8468836ea in datapath 19e7252b-db63-489c-9a93-8026377ebe8c unbound from our chassis#033[00m Nov 28 05:05:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:49.194 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 19e7252b-db63-489c-9a93-8026377ebe8c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:05:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:49.196 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[620d4250-43f4-4571-8476-ee8dfd7061eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:49 localhost nova_compute[280168]: 2025-11-28 10:05:49.200 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e154 e154: 6 total, 6 up, 6 in Nov 
28 05:05:49 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:49.399 2 INFO neutron.agent.securitygroups_rpc [None req-a6f5628a-dc81-40c9-a579-68393376dce2 8d30c732fa674cae8e1de9092f58edd9 3a67d7f32f5e49c3aed3e09278dd6c95 - - default default] Security group member updated ['371ca172-3d0a-4f94-811c-7c823124cef1']#033[00m Nov 28 05:05:49 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:49.417 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:49 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:49.643 261346 INFO neutron.agent.dhcp.agent [None req-60053e95-c58a-45b1-9c1d-d82a22fba80a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:49 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:49.644 261346 INFO neutron.agent.dhcp.agent [None req-60053e95-c58a-45b1-9c1d-d82a22fba80a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:49 localhost dnsmasq[315260]: exiting on receipt of SIGTERM Nov 28 05:05:49 localhost podman[316383]: 2025-11-28 10:05:49.726262955 +0000 UTC m=+0.065495328 container kill b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:49 localhost systemd[1]: libpod-b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4.scope: Deactivated successfully. 
Nov 28 05:05:49 localhost nova_compute[280168]: 2025-11-28 10:05:49.762 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v297: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 511 B/s wr, 1 op/s Nov 28 05:05:49 localhost podman[316397]: 2025-11-28 10:05:49.79721152 +0000 UTC m=+0.057629278 container died b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:49 localhost podman[316397]: 2025-11-28 10:05:49.823304989 +0000 UTC m=+0.083722647 container cleanup b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:05:49 localhost systemd[1]: libpod-conmon-b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4.scope: Deactivated 
successfully. Nov 28 05:05:49 localhost podman[316399]: 2025-11-28 10:05:49.881552785 +0000 UTC m=+0.132897075 container remove b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76551b5f-5d3c-486b-8256-6697e6d961af, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay-997025a36efb1e50d9ffe44c2974a8638074eb37fef508bc2be0cc49df956a21-merged.mount: Deactivated successfully. Nov 28 05:05:50 localhost systemd[1]: run-netns-qdhcp\x2d19e7252b\x2ddb63\x2d489c\x2d9a93\x2d8026377ebe8c.mount: Deactivated successfully. Nov 28 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay-f596bc5e7c28d7a22b37b15d00f7baa6c61a1b94d6dc8c55a4dae31f2f97828d-merged.mount: Deactivated successfully. Nov 28 05:05:50 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b1786ca741ddada675528e8264626c275f0e00eead8f1394ae47def187e20ad4-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:50.149 261346 INFO neutron.agent.dhcp.agent [None req-5670840b-f6bb-4d1d-bf62-8fafb33cdb50 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:50 localhost systemd[1]: run-netns-qdhcp\x2d76551b5f\x2d5d3c\x2d486b\x2d8256\x2d6697e6d961af.mount: Deactivated successfully. 
Nov 28 05:05:50 localhost nova_compute[280168]: 2025-11-28 10:05:50.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:50.293 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:50.461 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:50.848 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:05:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:50.849 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:05:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:50.849 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:05:51 localhost nova_compute[280168]: 2025-11-28 10:05:51.111 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:51 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:51.276 
261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.2 KiB/s wr, 55 op/s Nov 28 05:05:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:52 localhost nova_compute[280168]: 2025-11-28 10:05:52.581 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:52.704 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:05:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v299: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.2 KiB/s wr, 55 op/s Nov 28 05:05:53 localhost nova_compute[280168]: 2025-11-28 10:05:53.837 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:54 localhost nova_compute[280168]: 2025-11-28 10:05:54.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:05:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:05:54 localhost podman[316427]: 2025-11-28 10:05:54.985274574 +0000 UTC m=+0.081987564 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, name=ubi9-minimal, container_name=openstack_network_exporter) Nov 28 05:05:54 localhost podman[316427]: 2025-11-28 10:05:54.99884205 +0000 UTC m=+0.095555030 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64) Nov 28 05:05:55 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:05:55 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e155 e155: 6 total, 6 up, 6 in Nov 28 05:05:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:55.395 261346 INFO neutron.agent.linux.ip_lib [None req-da59e66f-1e4b-44f4-a073-460cabe89314 - - - - - -] Device tap241102ed-e9 cannot be used as it has no MAC address#033[00m Nov 28 05:05:55 localhost nova_compute[280168]: 2025-11-28 10:05:55.419 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:55 localhost kernel: device tap241102ed-e9 entered promiscuous mode Nov 28 05:05:55 localhost NetworkManager[5965]: [1764324355.4273] manager: (tap241102ed-e9): new Generic device (/org/freedesktop/NetworkManager/Devices/31) Nov 28 05:05:55 localhost nova_compute[280168]: 2025-11-28 10:05:55.429 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:55 localhost ovn_controller[152726]: 2025-11-28T10:05:55Z|00133|binding|INFO|Claiming lport 241102ed-e98f-481c-af36-a58d0b82a130 for this chassis. Nov 28 05:05:55 localhost ovn_controller[152726]: 2025-11-28T10:05:55Z|00134|binding|INFO|241102ed-e98f-481c-af36-a58d0b82a130: Claiming unknown Nov 28 05:05:55 localhost systemd-udevd[316457]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:05:55 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:55.439 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a67d7f32f5e49c3aed3e09278dd6c95', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3e102bc-7fe3-462e-8d76-cae98ee17de5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=241102ed-e98f-481c-af36-a58d0b82a130) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:55 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:55.441 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 241102ed-e98f-481c-af36-a58d0b82a130 in datapath 3a59b0d3-6d6c-43cf-8506-00d3024e1dd5 bound to our chassis#033[00m Nov 28 05:05:55 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:55.443 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port c7eb739b-fca4-4c4a-bcd6-eddbc69e6fb0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:05:55 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:55.444 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:05:55 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:55.444 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c64018e0-3abc-4bde-bb05-14c12f5079a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:55 localhost ovn_controller[152726]: 2025-11-28T10:05:55Z|00135|binding|INFO|Setting lport 241102ed-e98f-481c-af36-a58d0b82a130 ovn-installed in OVS Nov 28 05:05:55 localhost ovn_controller[152726]: 2025-11-28T10:05:55Z|00136|binding|INFO|Setting lport 241102ed-e98f-481c-af36-a58d0b82a130 up in Southbound Nov 28 05:05:55 localhost nova_compute[280168]: 2025-11-28 10:05:55.479 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:55 localhost nova_compute[280168]: 2025-11-28 10:05:55.515 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:55 localhost nova_compute[280168]: 2025-11-28 10:05:55.542 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 1.7 KiB/s wr, 54 op/s Nov 28 05:05:56 localhost podman[316512]: Nov 28 05:05:56 localhost podman[316512]: 2025-11-28 10:05:56.486436552 +0000 UTC m=+0.089008349 container create 
8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:56 localhost systemd[1]: Started libpod-conmon-8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf.scope. Nov 28 05:05:56 localhost podman[316512]: 2025-11-28 10:05:56.442514766 +0000 UTC m=+0.045086623 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:05:56 localhost systemd[1]: Started libcrun container. Nov 28 05:05:56 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c626be514af0ce8e815152f86e5ad34ffc6952c7e54fd8d57a5b98c9ea4b60f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:05:56 localhost podman[316512]: 2025-11-28 10:05:56.569980642 +0000 UTC m=+0.172552439 container init 8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:05:56 localhost podman[316512]: 2025-11-28 10:05:56.578196714 +0000 UTC m=+0.180768511 container start 
8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:05:56 localhost dnsmasq[316529]: started, version 2.85 cachesize 150 Nov 28 05:05:56 localhost dnsmasq[316529]: DNS service limited to local subnets Nov 28 05:05:56 localhost dnsmasq[316529]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:05:56 localhost dnsmasq[316529]: warning: no upstream servers configured Nov 28 05:05:56 localhost dnsmasq-dhcp[316529]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:05:56 localhost dnsmasq[316529]: read /var/lib/neutron/dhcp/3a59b0d3-6d6c-43cf-8506-00d3024e1dd5/addn_hosts - 0 addresses Nov 28 05:05:56 localhost dnsmasq-dhcp[316529]: read /var/lib/neutron/dhcp/3a59b0d3-6d6c-43cf-8506-00d3024e1dd5/host Nov 28 05:05:56 localhost dnsmasq-dhcp[316529]: read /var/lib/neutron/dhcp/3a59b0d3-6d6c-43cf-8506-00d3024e1dd5/opts Nov 28 05:05:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:05:56 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:56.795 261346 INFO neutron.agent.dhcp.agent [None req-1def1e25-c684-4f63-8aa1-117fef23afc0 - - - - - -] DHCP configuration for ports {'4abc60f3-e168-4dec-9d99-99bf38eae7a0'} is completed#033[00m Nov 28 05:05:57 localhost openstack_network_exporter[240973]: ERROR 10:05:57 
appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:05:57 localhost openstack_network_exporter[240973]: ERROR 10:05:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:05:57 localhost openstack_network_exporter[240973]: ERROR 10:05:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:05:57 localhost openstack_network_exporter[240973]: ERROR 10:05:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:05:57 localhost openstack_network_exporter[240973]: Nov 28 05:05:57 localhost openstack_network_exporter[240973]: ERROR 10:05:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:05:57 localhost openstack_network_exporter[240973]: Nov 28 05:05:57 localhost nova_compute[280168]: 2025-11-28 10:05:57.618 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v302: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 KiB/s wr, 52 op/s Nov 28 05:05:58 localhost podman[316547]: 2025-11-28 10:05:58.246849006 +0000 UTC m=+0.054100809 container kill 8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:05:58 localhost 
dnsmasq[316529]: exiting on receipt of SIGTERM Nov 28 05:05:58 localhost systemd[1]: libpod-8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf.scope: Deactivated successfully. Nov 28 05:05:58 localhost podman[316559]: 2025-11-28 10:05:58.303018698 +0000 UTC m=+0.043785964 container died 8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:58 localhost podman[316559]: 2025-11-28 10:05:58.332872742 +0000 UTC m=+0.073639938 container cleanup 8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:05:58 localhost systemd[1]: libpod-conmon-8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf.scope: Deactivated successfully. 
Nov 28 05:05:58 localhost podman[316561]: 2025-11-28 10:05:58.369807575 +0000 UTC m=+0.106314689 container remove 8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 05:05:58 localhost nova_compute[280168]: 2025-11-28 10:05:58.382 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:58 localhost ovn_controller[152726]: 2025-11-28T10:05:58Z|00137|binding|INFO|Releasing lport 241102ed-e98f-481c-af36-a58d0b82a130 from this chassis (sb_readonly=0) Nov 28 05:05:58 localhost kernel: device tap241102ed-e9 left promiscuous mode Nov 28 05:05:58 localhost ovn_controller[152726]: 2025-11-28T10:05:58Z|00138|binding|INFO|Setting lport 241102ed-e98f-481c-af36-a58d0b82a130 down in Southbound Nov 28 05:05:58 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:58.391 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-3a59b0d3-6d6c-43cf-8506-00d3024e1dd5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a67d7f32f5e49c3aed3e09278dd6c95', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3e102bc-7fe3-462e-8d76-cae98ee17de5, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=241102ed-e98f-481c-af36-a58d0b82a130) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:05:58 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:58.393 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 241102ed-e98f-481c-af36-a58d0b82a130 in datapath 3a59b0d3-6d6c-43cf-8506-00d3024e1dd5 unbound from our chassis#033[00m Nov 28 05:05:58 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:58.396 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3a59b0d3-6d6c-43cf-8506-00d3024e1dd5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:05:58 localhost ovn_metadata_agent[158525]: 2025-11-28 10:05:58.397 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[9eee3137-4672-4663-b0f0-d3f50d532006]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:05:58 localhost nova_compute[280168]: 2025-11-28 10:05:58.414 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:58 localhost nova_compute[280168]: 2025-11-28 10:05:58.416 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:05:58 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1783542772' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:05:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:05:58 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1783542772' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:05:58 localhost systemd[1]: tmp-crun.0VqOhB.mount: Deactivated successfully. Nov 28 05:05:58 localhost systemd[1]: var-lib-containers-storage-overlay-c626be514af0ce8e815152f86e5ad34ffc6952c7e54fd8d57a5b98c9ea4b60f6-merged.mount: Deactivated successfully. Nov 28 05:05:58 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ce55d06dc884b3adea11f4b138a43364413ea7f9c45256bc2b458e88b37eecf-userdata-shm.mount: Deactivated successfully. Nov 28 05:05:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:05:58.692 261346 INFO neutron.agent.dhcp.agent [None req-4c3bf9c5-dfb1-4524-b422-cffea4019160 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:05:58 localhost systemd[1]: run-netns-qdhcp\x2d3a59b0d3\x2d6d6c\x2d43cf\x2d8506\x2d00d3024e1dd5.mount: Deactivated successfully. 
Nov 28 05:05:58 localhost nova_compute[280168]: 2025-11-28 10:05:58.870 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:05:58 localhost podman[239012]: time="2025-11-28T10:05:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:05:58 localhost podman[239012]: @ - - [28/Nov/2025:10:05:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:05:58 localhost podman[239012]: @ - - [28/Nov/2025:10:05:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19212 "" "Go-http-client/1.1" Nov 28 05:05:59 localhost neutron_sriov_agent[254415]: 2025-11-28 10:05:59.715 2 INFO neutron.agent.securitygroups_rpc [None req-eb7cbd19-2bf0-4a29-9cd5-8cfb0b13af7b 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:05:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.8 KiB/s wr, 44 op/s Nov 28 05:05:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:05:59 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2237297014' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:05:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:05:59 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2237297014' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:00 localhost nova_compute[280168]: 2025-11-28 10:06:00.261 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:00 localhost nova_compute[280168]: 2025-11-28 10:06:00.261 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 05:06:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e156 e156: 6 total, 6 up, 6 in Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster 
network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:06:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:06:01 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:01.206 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: 
{}#033[00m Nov 28 05:06:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e157 e157: 6 total, 6 up, 6 in Nov 28 05:06:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v306: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 3.2 MiB/s wr, 125 op/s Nov 28 05:06:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:01 localhost nova_compute[280168]: 2025-11-28 10:06:01.881 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e158 e158: 6 total, 6 up, 6 in Nov 28 05:06:02 localhost nova_compute[280168]: 2025-11-28 10:06:02.622 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:02 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:02.695 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:02.862 2 INFO neutron.agent.securitygroups_rpc [None req-e67102ba-fe9d-44cc-8c94-469bc136f71c 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:03.020 2 INFO neutron.agent.securitygroups_rpc [None req-e67102ba-fe9d-44cc-8c94-469bc136f71c 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.171 261346 INFO neutron.agent.dhcp.agent 
[None req-694fd91a-974f-4128-82f1-92e341d4f45d - - - - - -] Synchronizing state#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.305 261346 INFO neutron.agent.dhcp.agent [None req-b04f7734-15e3-4a1f-ac37-0c3cc54f42ff - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.307 261346 INFO neutron.agent.dhcp.agent [-] Starting network 42de65c3-1b93-417a-9a72-a70c0a174963 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.307 261346 INFO neutron.agent.dhcp.agent [-] Finished network 42de65c3-1b93-417a-9a72-a70c0a174963 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.308 261346 INFO neutron.agent.dhcp.agent [-] Starting network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.308 261346 INFO neutron.agent.dhcp.agent [-] Finished network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.308 261346 INFO neutron.agent.dhcp.agent [-] Starting network 91fe7ebb-e32d-453f-9285-23228e6cf776 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.309 261346 INFO neutron.agent.dhcp.agent [-] Finished network 91fe7ebb-e32d-453f-9285-23228e6cf776 dhcp configuration#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.309 261346 INFO neutron.agent.dhcp.agent [None req-b04f7734-15e3-4a1f-ac37-0c3cc54f42ff - - - - - -] Synchronizing state complete#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.310 261346 INFO neutron.agent.dhcp.agent [None req-c62c5ab5-d92f-4c9b-95cd-e42bc27bc915 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:03 
localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e159 e159: 6 total, 6 up, 6 in Nov 28 05:06:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:03.627 2 INFO neutron.agent.securitygroups_rpc [None req-06fb4107-5237-41cb-aedf-7ec837b7d0c6 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.653 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v309: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 5.3 MiB/s wr, 206 op/s Nov 28 05:06:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:03.854 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:03 localhost nova_compute[280168]: 2025-11-28 10:06:03.911 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:04.294 2 INFO neutron.agent.securitygroups_rpc [None req-f2f37d61-bf24-49e1-9d4d-20b98269f559 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:04.346 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:04.844 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck 
run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:06:04 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:06:04 localhost systemd[1]: tmp-crun.UfgAs0.mount: Deactivated successfully. Nov 28 05:06:04 localhost podman[316587]: 2025-11-28 10:06:04.983471922 +0000 UTC m=+0.086491781 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}) Nov 28 05:06:04 localhost podman[316587]: 2025-11-28 10:06:04.996859873 +0000 UTC m=+0.099879732 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible) Nov 28 05:06:05 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:06:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:06:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1537040644' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:06:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:06:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1537040644' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:05 localhost podman[316589]: 2025-11-28 10:06:05.08485759 +0000 UTC m=+0.179103930 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base 
Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:05 localhost podman[316589]: 2025-11-28 10:06:05.093419363 +0000 UTC m=+0.187665713 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 05:06:05 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:06:05 localhost podman[316595]: 2025-11-28 10:06:05.057963226 +0000 UTC m=+0.146998976 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:06:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e160 e160: 6 total, 6 up, 6 in Nov 28 05:06:05 localhost podman[316588]: 2025-11-28 10:06:05.143134556 +0000 UTC m=+0.240781690 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 05:06:05 localhost podman[316595]: 2025-11-28 10:06:05.192573451 +0000 UTC m=+0.281609261 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:06:05 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:06:05 localhost podman[316588]: 2025-11-28 10:06:05.225521181 +0000 UTC m=+0.323168305 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:05 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:06:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:06:05 Nov 28 05:06:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:06:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:06:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'manila_data', 'backups', 'images', '.mgr', 'vms'] Nov 28 05:06:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:06:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v311: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 5.0 MiB/s wr, 194 op/s Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003266113553880514 quantized to 32 (current 32) Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Nov 28 05:06:05 localhost 
ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:06:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16) Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:06:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:06:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:07 localhost nova_compute[280168]: 2025-11-28 10:06:07.655 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 2.5 KiB/s wr, 50 op/s Nov 28 05:06:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:06:08 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3328946498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:06:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:06:08 localhost podman[316673]: 2025-11-28 10:06:08.877940582 +0000 UTC m=+0.092029892 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:06:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:08.879 261346 INFO neutron.agent.linux.ip_lib [None req-b352c162-d9d7-4ff7-bb47-6368ce0a595e - - - - - -] Device tap0eb8bf5c-38 cannot be used as it has no MAC address#033[00m Nov 28 05:06:08 localhost podman[316673]: 2025-11-28 10:06:08.890731243 +0000 UTC m=+0.104820503 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:06:08 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:06:08 localhost nova_compute[280168]: 2025-11-28 10:06:08.908 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:08 localhost kernel: device tap0eb8bf5c-38 entered promiscuous mode Nov 28 05:06:08 localhost NetworkManager[5965]: [1764324368.9184] manager: (tap0eb8bf5c-38): new Generic device (/org/freedesktop/NetworkManager/Devices/32) Nov 28 05:06:08 localhost ovn_controller[152726]: 2025-11-28T10:06:08Z|00139|binding|INFO|Claiming lport 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 for this chassis. Nov 28 05:06:08 localhost ovn_controller[152726]: 2025-11-28T10:06:08Z|00140|binding|INFO|0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6: Claiming unknown Nov 28 05:06:08 localhost nova_compute[280168]: 2025-11-28 10:06:08.922 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:08 localhost systemd-udevd[316704]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:06:08 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:08.933 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-b2c4ac07-8851-40d3-9495-d0489b67c4c3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c4ac07-8851-40d3-9495-d0489b67c4c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3c0d1ce8d854a7b9ffc953e88cd2c44', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=940d6739-e1d9-4dcd-a724-785ba886c2af, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:08 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:08.935 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 in datapath b2c4ac07-8851-40d3-9495-d0489b67c4c3 bound to our chassis#033[00m Nov 28 05:06:08 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:08.938 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port 58641ed8-9be5-4200-85c3-ac1f139da10b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:06:08 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:08.939 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c4ac07-8851-40d3-9495-d0489b67c4c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:08 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:08.940 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[b1add40e-5b98-49c3-ab79-6ee062113ceb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost ovn_controller[152726]: 2025-11-28T10:06:08Z|00141|binding|INFO|Setting lport 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 ovn-installed in OVS Nov 28 05:06:08 localhost ovn_controller[152726]: 2025-11-28T10:06:08Z|00142|binding|INFO|Setting lport 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 up in Southbound Nov 28 05:06:08 localhost nova_compute[280168]: 2025-11-28 10:06:08.958 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 28 05:06:08 localhost journal[228057]: ethtool ioctl error on tap0eb8bf5c-38: No such device Nov 
28 05:06:08 localhost nova_compute[280168]: 2025-11-28 10:06:08.997 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:09 localhost nova_compute[280168]: 2025-11-28 10:06:09.033 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:09 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:09.171 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:08Z, description=, device_id=0c7c1c6e-1c46-40f5-85d2-30567725a06d, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=eda15d4b-6e20-425d-85eb-2095daa7da9b, ip_allocation=immediate, mac_address=fa:16:3e:6c:31:2b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2240, 
status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:06:08Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:06:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e161 e161: 6 total, 6 up, 6 in Nov 28 05:06:09 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:09.395 2 INFO neutron.agent.securitygroups_rpc [None req-88638c5c-0267-4c55-b76f-e8bedf4b6799 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:09 localhost systemd[1]: tmp-crun.HZTDMN.mount: Deactivated successfully. Nov 28 05:06:09 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:06:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:06:09 localhost podman[316757]: 2025-11-28 10:06:09.407935175 +0000 UTC m=+0.069175431 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:06:09 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:06:09 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:09.739 261346 INFO neutron.agent.dhcp.agent [None req-0ef17ad3-b4f0-45a4-9f72-bd0ee6e421f7 - - - - - -] DHCP configuration for ports {'eda15d4b-6e20-425d-85eb-2095daa7da9b'} is completed#033[00m Nov 28 05:06:09 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 49 op/s Nov 28 05:06:09 localhost podman[316811]: Nov 28 05:06:09 localhost podman[316811]: 2025-11-28 10:06:09.971139476 +0000 UTC m=+0.081942282 container create a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 05:06:10 localhost systemd[1]: Started libpod-conmon-a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1.scope. Nov 28 05:06:10 localhost systemd[1]: tmp-crun.yBOGfI.mount: Deactivated successfully. Nov 28 05:06:10 localhost systemd[1]: Started libcrun container. 
Nov 28 05:06:10 localhost podman[316811]: 2025-11-28 10:06:09.934536974 +0000 UTC m=+0.045339810 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:06:10 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a112523a391658be219e8ca2b94928afac8124141e68c4a75a8a0c64ca4d98f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:06:10 localhost podman[316811]: 2025-11-28 10:06:10.04857967 +0000 UTC m=+0.159382476 container init a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:06:10 localhost podman[316811]: 2025-11-28 10:06:10.059819054 +0000 UTC m=+0.170621860 container start a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:10 localhost dnsmasq[316830]: started, version 2.85 cachesize 150 Nov 28 05:06:10 localhost dnsmasq[316830]: DNS service limited to local subnets Nov 28 05:06:10 localhost dnsmasq[316830]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:06:10 localhost dnsmasq[316830]: warning: no upstream servers configured Nov 28 05:06:10 localhost dnsmasq-dhcp[316830]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:06:10 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 0 addresses Nov 28 05:06:10 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host Nov 28 05:06:10 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts Nov 28 05:06:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:10.079 2 INFO neutron.agent.securitygroups_rpc [None req-58c4368b-992c-4171-8380-96e11b260575 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:10 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:10.189 261346 INFO neutron.agent.dhcp.agent [None req-34377480-17b1-454e-befc-c8d8416758ad - - - - - -] DHCP configuration for ports {'f9c3bbdf-3272-4132-8f82-3ca63ccc1570'} is completed#033[00m Nov 28 05:06:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:06:10 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/144526477' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:06:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:06:10 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/144526477' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:10.914 2 INFO neutron.agent.securitygroups_rpc [None req-dbbfa92a-6f48-4219-96e5-450182918d3d e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:11.003 2 INFO neutron.agent.securitygroups_rpc [None req-dbbfa92a-6f48-4219-96e5-450182918d3d e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e162 e162: 6 total, 6 up, 6 in Nov 28 05:06:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:11.325 2 INFO neutron.agent.securitygroups_rpc [None req-a2df05ee-0a35-442d-aa18-eb9de4dda01c e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:11.343 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:11.785 2 INFO neutron.agent.securitygroups_rpc [None req-cbfd0cad-e26e-4d58-9458-36c5ead2083c 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 5.9 KiB/s wr, 157 op/s Nov 28 05:06:11 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:11.845 2 INFO neutron.agent.securitygroups_rpc [None req-841a5df4-1f75-4611-8939-235283ca6a97 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:11 localhost nova_compute[280168]: 2025-11-28 10:06:11.864 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:06:11 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:11.882 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:11 localhost podman[316831]: 2025-11-28 10:06:11.979577351 +0000 UTC m=+0.087893245 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:12 localhost podman[316831]: 2025-11-28 10:06:12.021727253 +0000 UTC m=+0.130043107 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 28 05:06:12 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:06:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e163 e163: 6 total, 6 up, 6 in Nov 28 05:06:12 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:12.378 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:12 localhost nova_compute[280168]: 2025-11-28 10:06:12.694 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 93 KiB/s rd, 4.0 KiB/s wr, 124 op/s Nov 28 05:06:13 localhost nova_compute[280168]: 2025-11-28 10:06:13.960 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:14 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:14.064 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, 
binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:13Z, description=, device_id=0c7c1c6e-1c46-40f5-85d2-30567725a06d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e7862df1-355e-4f97-9168-e87770babee6, ip_allocation=immediate, mac_address=fa:16:3e:40:b9:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:04Z, description=, dns_domain=, id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1354719988-network, port_security_enabled=True, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64958, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2202, status=ACTIVE, subnets=['f4b4dc5d-f654-46e4-8ff2-bd52eff10306'], tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:06:06Z, vlan_transparent=None, network_id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, port_security_enabled=False, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2267, status=DOWN, tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:06:13Z on network b2c4ac07-8851-40d3-9495-d0489b67c4c3#033[00m Nov 28 05:06:14 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 1 addresses Nov 28 05:06:14 localhost podman[316867]: 2025-11-28 10:06:14.268658667 +0000 UTC m=+0.057684639 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:06:14 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host Nov 28 05:06:14 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts Nov 28 05:06:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e164 e164: 6 total, 6 up, 6 in Nov 28 05:06:14 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:14.647 261346 INFO neutron.agent.dhcp.agent [None req-3eaedd66-05ec-4a99-8df9-df7c0f22a0c5 - - - - - -] DHCP configuration for ports {'e7862df1-355e-4f97-9168-e87770babee6'} is completed#033[00m Nov 28 05:06:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:06:14 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3736797209' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:06:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:06:14 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3736797209' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:15 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:15.017 2 INFO neutron.agent.securitygroups_rpc [None req-e90d3b0d-d250-425f-9007-eccb585011b0 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e165 e165: 6 total, 6 up, 6 in Nov 28 05:06:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 145 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 120 KiB/s rd, 5.2 KiB/s wr, 160 op/s Nov 28 05:06:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:15.999 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:15Z, description=, device_id=5b45f823-0eca-4648-936e-96781a85013b, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=951fee01-fb0c-4d4d-b345-0c23456eb150, ip_allocation=immediate, mac_address=fa:16:3e:95:f7:27, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, 
subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2278, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:06:15Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:06:16 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:16.028 2 INFO neutron.agent.securitygroups_rpc [None req-3771b2dc-83c7-4322-8cb3-68ef5f3840bb e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:16 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:06:16 localhost podman[316904]: 2025-11-28 10:06:16.251249531 +0000 UTC m=+0.061069873 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 28 05:06:16 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:06:16 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:06:16 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:16.511 261346 INFO neutron.agent.dhcp.agent [None 
req-b559ac96-b038-4740-a953-bf001a3a52b2 - - - - - -] DHCP configuration for ports {'951fee01-fb0c-4d4d-b345-0c23456eb150'} is completed#033[00m Nov 28 05:06:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:17.155 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:13Z, description=, device_id=0c7c1c6e-1c46-40f5-85d2-30567725a06d, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e7862df1-355e-4f97-9168-e87770babee6, ip_allocation=immediate, mac_address=fa:16:3e:40:b9:f1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:04Z, description=, dns_domain=, id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1354719988-network, port_security_enabled=True, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64958, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2202, status=ACTIVE, subnets=['f4b4dc5d-f654-46e4-8ff2-bd52eff10306'], tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:06:06Z, vlan_transparent=None, network_id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, port_security_enabled=False, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2267, status=DOWN, tags=[], 
tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:06:13Z on network b2c4ac07-8851-40d3-9495-d0489b67c4c3#033[00m Nov 28 05:06:17 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 1 addresses Nov 28 05:06:17 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host Nov 28 05:06:17 localhost systemd[1]: tmp-crun.RgNVpV.mount: Deactivated successfully. Nov 28 05:06:17 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts Nov 28 05:06:17 localhost podman[316939]: 2025-11-28 10:06:17.396004135 +0000 UTC m=+0.062960740 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:06:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:17.672 261346 INFO neutron.agent.dhcp.agent [None req-82077205-b7ef-4c5a-bfd5-7f3b6fc5d19b - - - - - -] DHCP configuration for ports {'e7862df1-355e-4f97-9168-e87770babee6'} is completed#033[00m Nov 28 05:06:17 localhost nova_compute[280168]: 2025-11-28 10:06:17.696 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v322: 177 pgs: 177 active+clean; 145 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.5 KiB/s wr, 65 op/s Nov 28 05:06:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd 
e166 e166: 6 total, 6 up, 6 in Nov 28 05:06:18 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:18.785 261346 INFO neutron.agent.linux.ip_lib [None req-76fba609-54ed-4c5d-8243-74b30a1b7e43 - - - - - -] Device tapa14f394f-d8 cannot be used as it has no MAC address#033[00m Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.842 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost kernel: device tapa14f394f-d8 entered promiscuous mode Nov 28 05:06:18 localhost NetworkManager[5965]: [1764324378.8513] manager: (tapa14f394f-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/33) Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.850 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost ovn_controller[152726]: 2025-11-28T10:06:18Z|00143|binding|INFO|Claiming lport a14f394f-d877-4b96-b490-240cee4c17a9 for this chassis. Nov 28 05:06:18 localhost ovn_controller[152726]: 2025-11-28T10:06:18Z|00144|binding|INFO|a14f394f-d877-4b96-b490-240cee4c17a9: Claiming unknown Nov 28 05:06:18 localhost systemd-udevd[316969]: Network interface NamePolicy= disabled on kernel command line. 
Nov 28 05:06:18 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:18.863 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe07eabe-b5fa-45d0-9144-395b300628e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a14f394f-d877-4b96-b490-240cee4c17a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:18 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:18.865 158530 INFO neutron.agent.ovn.metadata.agent [-] Port a14f394f-d877-4b96-b490-240cee4c17a9 in datapath 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4 bound to our chassis#033[00m Nov 28 05:06:18 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:18.867 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:06:18 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:18.868 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[3a29e55a-d24a-42e2-ae3b-3f81fe38bed0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.885 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost ovn_controller[152726]: 2025-11-28T10:06:18Z|00145|binding|INFO|Setting lport a14f394f-d877-4b96-b490-240cee4c17a9 ovn-installed in OVS Nov 28 05:06:18 localhost ovn_controller[152726]: 2025-11-28T10:06:18Z|00146|binding|INFO|Setting lport a14f394f-d877-4b96-b490-240cee4c17a9 up in Southbound Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.891 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.893 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost 
journal[228057]: ethtool ioctl error on tapa14f394f-d8: No such device Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.921 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.945 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:18 localhost nova_compute[280168]: 2025-11-28 10:06:18.962 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e167 e167: 6 total, 6 up, 6 in Nov 28 05:06:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 145 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 2.7 KiB/s wr, 72 op/s Nov 28 05:06:19 localhost podman[317041]: Nov 28 05:06:19 localhost podman[317041]: 2025-11-28 10:06:19.981184676 +0000 UTC m=+0.093917259 container create 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:20 localhost systemd[1]: Started libpod-conmon-57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8.scope. 
Nov 28 05:06:20 localhost podman[317041]: 2025-11-28 10:06:19.935795255 +0000 UTC m=+0.048527868 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:06:20 localhost systemd[1]: tmp-crun.GlTCmu.mount: Deactivated successfully. Nov 28 05:06:20 localhost systemd[1]: Started libcrun container. Nov 28 05:06:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da9ef93f4d18acab10afc53cca5f72b138d7269b0830746f92d68d65b0e6d6ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:06:20 localhost podman[317041]: 2025-11-28 10:06:20.085955708 +0000 UTC m=+0.198688251 container init 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:20 localhost podman[317041]: 2025-11-28 10:06:20.09681483 +0000 UTC m=+0.209547403 container start 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 28 05:06:20 localhost dnsmasq[317059]: started, version 2.85 cachesize 150 Nov 28 05:06:20 
localhost dnsmasq[317059]: DNS service limited to local subnets Nov 28 05:06:20 localhost dnsmasq[317059]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:06:20 localhost dnsmasq[317059]: warning: no upstream servers configured Nov 28 05:06:20 localhost dnsmasq-dhcp[317059]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:06:20 localhost dnsmasq[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/addn_hosts - 0 addresses Nov 28 05:06:20 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/host Nov 28 05:06:20 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/opts Nov 28 05:06:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e168 e168: 6 total, 6 up, 6 in Nov 28 05:06:20 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:20.309 261346 INFO neutron.agent.dhcp.agent [None req-1c87ab88-3398-451a-9add-22742c15a1f1 - - - - - -] DHCP configuration for ports {'d6e76183-1869-4503-9c79-f3f08e79db56'} is completed#033[00m Nov 28 05:06:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e169 e169: 6 total, 6 up, 6 in Nov 28 05:06:21 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:21.643 261346 INFO neutron.agent.linux.ip_lib [None req-a88592ad-e761-4ea4-80f7-40a9f3361f28 - - - - - -] Device tapd03b0310-2c cannot be used as it has no MAC address#033[00m Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.663 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost kernel: device tapd03b0310-2c entered promiscuous mode Nov 28 05:06:21 localhost NetworkManager[5965]: [1764324381.6722] manager: (tapd03b0310-2c): new Generic device (/org/freedesktop/NetworkManager/Devices/34) Nov 28 
05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.673 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost ovn_controller[152726]: 2025-11-28T10:06:21Z|00147|binding|INFO|Claiming lport d03b0310-2c7a-49f5-a26a-a0b7e61df97d for this chassis. Nov 28 05:06:21 localhost ovn_controller[152726]: 2025-11-28T10:06:21Z|00148|binding|INFO|d03b0310-2c7a-49f5-a26a-a0b7e61df97d: Claiming unknown Nov 28 05:06:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:21.687 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-844c2297-3cba-435a-8224-a5874c8fc772', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844c2297-3cba-435a-8224-a5874c8fc772', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fcf2c61-5d0b-45c2-9bee-03c636787a2e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d03b0310-2c7a-49f5-a26a-a0b7e61df97d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:21 localhost ovn_metadata_agent[158525]: 
2025-11-28 10:06:21.690 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d03b0310-2c7a-49f5-a26a-a0b7e61df97d in datapath 844c2297-3cba-435a-8224-a5874c8fc772 bound to our chassis#033[00m Nov 28 05:06:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:21.692 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 844c2297-3cba-435a-8224-a5874c8fc772 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:06:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:21.693 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[48e45ea5-a971-4f9d-b208-d208bbf3f997]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.711 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost ovn_controller[152726]: 2025-11-28T10:06:21Z|00149|binding|INFO|Setting lport d03b0310-2c7a-49f5-a26a-a0b7e61df97d ovn-installed in OVS Nov 28 05:06:21 localhost ovn_controller[152726]: 2025-11-28T10:06:21Z|00150|binding|INFO|Setting lport d03b0310-2c7a-49f5-a26a-a0b7e61df97d up in Southbound Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.721 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.723 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.754 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:21.768 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:21Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=2cd09136-ad69-4fac-87fd-7f802c66ea93, ip_allocation=immediate, mac_address=fa:16:3e:35:91:e0, name=tempest-PortsTestJSON-1134682245, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:16Z, description=, dns_domain=, id=8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-241150779, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28617, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2281, status=ACTIVE, subnets=['7e1cf227-5201-4b25-a503-c669fcd1fd8a'], tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:06:17Z, vlan_transparent=None, network_id=8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2309, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:06:21Z on network 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4#033[00m Nov 28 05:06:21 localhost nova_compute[280168]: 2025-11-28 10:06:21.784 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 145 MiB data, 811 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 4.7 KiB/s wr, 100 op/s Nov 28 05:06:22 localhost dnsmasq[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/addn_hosts - 1 addresses Nov 28 05:06:22 localhost podman[317101]: 2025-11-28 10:06:22.045172324 +0000 UTC m=+0.063664662 container kill 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:22 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/host Nov 28 05:06:22 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/opts Nov 28 05:06:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:22.363 261346 INFO neutron.agent.dhcp.agent [None req-06129683-9dfb-429c-a9ff-ea718def59e4 - - - - - -] DHCP configuration for ports {'2cd09136-ad69-4fac-87fd-7f802c66ea93'} is completed#033[00m Nov 28 05:06:22 localhost podman[317173]: Nov 28 05:06:22 localhost podman[317173]: 2025-11-28 10:06:22.699304482 +0000 UTC m=+0.092549088 container create e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:06:22 localhost nova_compute[280168]: 2025-11-28 10:06:22.743 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:22 localhost podman[317173]: 2025-11-28 10:06:22.653458217 +0000 UTC m=+0.046702863 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:06:22 localhost systemd[1]: Started libpod-conmon-e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc.scope. Nov 28 05:06:22 localhost systemd[1]: Started libcrun container. 
Nov 28 05:06:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7af7e398dae285d45d069130170b8c66994cef9726e98fc0beb2e1c05467b9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:06:22 localhost podman[317173]: 2025-11-28 10:06:22.801125763 +0000 UTC m=+0.194370379 container init e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:06:22 localhost podman[317173]: 2025-11-28 10:06:22.812216763 +0000 UTC m=+0.205461419 container start e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS) Nov 28 05:06:22 localhost dnsmasq[317213]: started, version 2.85 cachesize 150 Nov 28 05:06:22 localhost dnsmasq[317213]: DNS service limited to local subnets Nov 28 05:06:22 localhost dnsmasq[317213]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:06:22 localhost dnsmasq[317213]: warning: no upstream servers 
configured Nov 28 05:06:22 localhost dnsmasq-dhcp[317213]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:06:22 localhost dnsmasq[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/addn_hosts - 0 addresses Nov 28 05:06:22 localhost dnsmasq-dhcp[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/host Nov 28 05:06:22 localhost dnsmasq-dhcp[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/opts Nov 28 05:06:22 localhost podman[317230]: 2025-11-28 10:06:22.986149254 +0000 UTC m=+0.064712494 container kill 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:22 localhost dnsmasq[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/addn_hosts - 0 addresses Nov 28 05:06:22 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/host Nov 28 05:06:22 localhost dnsmasq-dhcp[317059]: read /var/lib/neutron/dhcp/8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4/opts Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.036 261346 INFO neutron.agent.dhcp.agent [None req-7a188e46-e70e-464f-b2c6-b19fdc568a1f - - - - - -] DHCP configuration for ports {'ec1f1e87-c34a-4cb6-8e4b-7a7c76325fc7'} is completed#033[00m Nov 28 05:06:23 localhost ovn_controller[152726]: 2025-11-28T10:06:23Z|00151|binding|INFO|Releasing lport d03b0310-2c7a-49f5-a26a-a0b7e61df97d from this chassis (sb_readonly=0) Nov 28 05:06:23 localhost kernel: device 
tapd03b0310-2c left promiscuous mode Nov 28 05:06:23 localhost ovn_controller[152726]: 2025-11-28T10:06:23Z|00152|binding|INFO|Setting lport d03b0310-2c7a-49f5-a26a-a0b7e61df97d down in Southbound Nov 28 05:06:23 localhost nova_compute[280168]: 2025-11-28 10:06:23.179 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:23 localhost nova_compute[280168]: 2025-11-28 10:06:23.198 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:23.205 158530 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 754f3c1f-ac2e-4666-b728-9ee5f97abccb with type ""#033[00m Nov 28 05:06:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:23.206 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-844c2297-3cba-435a-8224-a5874c8fc772', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-844c2297-3cba-435a-8224-a5874c8fc772', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], 
datapath=5fcf2c61-5d0b-45c2-9bee-03c636787a2e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=d03b0310-2c7a-49f5-a26a-a0b7e61df97d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:23.207 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d03b0310-2c7a-49f5-a26a-a0b7e61df97d in datapath 844c2297-3cba-435a-8224-a5874c8fc772 unbound from our chassis#033[00m Nov 28 05:06:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:23.209 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 844c2297-3cba-435a-8224-a5874c8fc772, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:23 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:23.209 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[acb9f861-6669-4add-a543-8e47811af8e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e170 e170: 6 total, 6 up, 6 in Nov 28 05:06:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:06:23 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:06:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:06:23 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:06:23 localhost ceph-mon[301134]: 
mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:06:23 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 4b129d6e-9a4e-4fce-a15d-8ce31e0d01a4 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:06:23 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 4b129d6e-9a4e-4fce-a15d-8ce31e0d01a4 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:06:23 localhost ceph-mgr[286188]: [progress INFO root] Completed event 4b129d6e-9a4e-4fce-a15d-8ce31e0d01a4 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:06:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:06:23 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:06:23 localhost nova_compute[280168]: 2025-11-28 10:06:23.566 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:23 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:23.639 2 INFO neutron.agent.securitygroups_rpc [None req-5b881bab-382f-42c7-b1b6-bde09ef38c32 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:23 localhost dnsmasq[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/addn_hosts - 0 addresses Nov 28 05:06:23 localhost dnsmasq-dhcp[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/host Nov 28 05:06:23 localhost dnsmasq-dhcp[317213]: read /var/lib/neutron/dhcp/844c2297-3cba-435a-8224-a5874c8fc772/opts Nov 28 05:06:23 localhost podman[317302]: 2025-11-28 
10:06:23.759593008 +0000 UTC m=+0.062120714 container kill e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent [None req-83103f8b-35de-43cd-9e96-7816cc1f876c - - - - - -] Unable to reload_allocations dhcp for 844c2297-3cba-435a-8224-a5874c8fc772.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapd03b0310-2c not found in namespace qdhcp-844c2297-3cba-435a-8224-a5874c8fc772. 
Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR 
neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Nov 28 05:06:23 
localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent return fut.result() Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent return self.__get_result() Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent raise self._exception Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__ Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 
ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapd03b0310-2c not found in namespace qdhcp-844c2297-3cba-435a-8224-a5874c8fc772. Nov 28 05:06:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:23.785 261346 ERROR neutron.agent.dhcp.agent #033[00m Nov 28 05:06:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 145 MiB data, 811 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.3 KiB/s wr, 91 op/s Nov 28 05:06:23 localhost nova_compute[280168]: 2025-11-28 10:06:23.928 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:23 localhost nova_compute[280168]: 2025-11-28 10:06:23.965 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:24 localhost podman[317351]: 2025-11-28 10:06:24.131110705 +0000 UTC m=+0.067733748 container kill 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:06:24 localhost dnsmasq[317059]: exiting on receipt of SIGTERM Nov 28 05:06:24 localhost systemd[1]: libpod-57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8.scope: Deactivated successfully. 
Nov 28 05:06:24 localhost podman[317364]: 2025-11-28 10:06:24.21840977 +0000 UTC m=+0.067115428 container died 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:06:24 localhost systemd[1]: tmp-crun.5PvZRw.mount: Deactivated successfully. Nov 28 05:06:24 localhost podman[317364]: 2025-11-28 10:06:24.259423457 +0000 UTC m=+0.108129055 container cleanup 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:06:24 localhost systemd[1]: libpod-conmon-57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8.scope: Deactivated successfully. 
Nov 28 05:06:24 localhost podman[317365]: 2025-11-28 10:06:24.294163152 +0000 UTC m=+0.137564927 container remove 57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:06:24 localhost ovn_controller[152726]: 2025-11-28T10:06:24Z|00153|binding|INFO|Releasing lport a14f394f-d877-4b96-b490-240cee4c17a9 from this chassis (sb_readonly=0) Nov 28 05:06:24 localhost kernel: device tapa14f394f-d8 left promiscuous mode Nov 28 05:06:24 localhost ovn_controller[152726]: 2025-11-28T10:06:24Z|00154|binding|INFO|Setting lport a14f394f-d877-4b96-b490-240cee4c17a9 down in Southbound Nov 28 05:06:24 localhost nova_compute[280168]: 2025-11-28 10:06:24.311 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:24.325 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 
'neutron:network_name': 'neutron-8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe07eabe-b5fa-45d0-9144-395b300628e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=a14f394f-d877-4b96-b490-240cee4c17a9) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:24 localhost nova_compute[280168]: 2025-11-28 10:06:24.329 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:24.331 158530 INFO neutron.agent.ovn.metadata.agent [-] Port a14f394f-d877-4b96-b490-240cee4c17a9 in datapath 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4 unbound from our chassis#033[00m Nov 28 05:06:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:24.334 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:24 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:24.336 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[d53f6ad2-251d-4f25-9d30-c44646e4b251]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:24 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", 
"entity": "client.admin"} : dispatch Nov 28 05:06:24 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:06:24 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:24.517 2 INFO neutron.agent.securitygroups_rpc [None req-043214bd-8f49-4013-b76d-a6a4f382dbad e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:24 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:24.521 261346 INFO neutron.agent.dhcp.agent [None req-b04f7734-15e3-4a1f-ac37-0c3cc54f42ff - - - - - -] Synchronizing state#033[00m Nov 28 05:06:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e171 e171: 6 total, 6 up, 6 in Nov 28 05:06:24 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:24.732 261346 INFO neutron.agent.dhcp.agent [None req-dffd8019-a902-4d9f-b9a5-2d63c26e7b70 - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 28 05:06:24 localhost systemd[1]: var-lib-containers-storage-overlay-da9ef93f4d18acab10afc53cca5f72b138d7269b0830746f92d68d65b0e6d6ba-merged.mount: Deactivated successfully. Nov 28 05:06:24 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-57bdded9be6509522a89034674ed2eb843af9f2d8f84f0925466760b8a7d72a8-userdata-shm.mount: Deactivated successfully. Nov 28 05:06:24 localhost systemd[1]: run-netns-qdhcp\x2d8b55fbca\x2d6e7b\x2d41f3\x2dbf9f\x2dcfd8b31211d4.mount: Deactivated successfully. 
Nov 28 05:06:24 localhost dnsmasq[317213]: exiting on receipt of SIGTERM Nov 28 05:06:24 localhost podman[317407]: 2025-11-28 10:06:24.931378421 +0000 UTC m=+0.070182062 container kill e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:06:24 localhost systemd[1]: libpod-e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc.scope: Deactivated successfully. Nov 28 05:06:24 localhost podman[317419]: 2025-11-28 10:06:24.997124197 +0000 UTC m=+0.053796131 container died e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:06:25 localhost podman[317419]: 2025-11-28 10:06:25.028674883 +0000 UTC m=+0.085346817 container cleanup e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:06:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:06:25 localhost systemd[1]: libpod-conmon-e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc.scope: Deactivated successfully. Nov 28 05:06:25 localhost podman[317426]: 2025-11-28 10:06:25.084912497 +0000 UTC m=+0.126228120 container remove e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-844c2297-3cba-435a-8224-a5874c8fc772, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3) Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.116 261346 INFO neutron.agent.dhcp.agent [-] Starting network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb dhcp configuration#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.117 261346 INFO neutron.agent.dhcp.agent [-] Finished network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb dhcp configuration#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.117 261346 INFO neutron.agent.dhcp.agent [-] Starting network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.118 261346 INFO neutron.agent.dhcp.agent [-] Finished network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:25 localhost 
neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.118 261346 INFO neutron.agent.dhcp.agent [-] Starting network 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4 dhcp configuration#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.118 261346 INFO neutron.agent.dhcp.agent [-] Finished network 8b55fbca-6e7b-41f3-bf9f-cfd8b31211d4 dhcp configuration#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.119 261346 INFO neutron.agent.dhcp.agent [None req-ba59b665-bbba-47f6-9efa-63a243c41137 - - - - - -] Synchronizing state complete#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.120 261346 INFO neutron.agent.dhcp.agent [None req-1f11cdf6-59fe-4b85-8008-6ece80439f46 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.121 261346 INFO neutron.agent.dhcp.agent [None req-1f11cdf6-59fe-4b85-8008-6ece80439f46 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.121 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:25 localhost podman[317445]: 2025-11-28 10:06:25.134470896 +0000 UTC m=+0.090814365 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': 
[], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.) 
Nov 28 05:06:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:25.135 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e172 e172: 6 total, 6 up, 6 in Nov 28 05:06:25 localhost podman[317445]: 2025-11-28 10:06:25.16854139 +0000 UTC m=+0.124884899 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, 
io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible) Nov 28 05:06:25 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:06:25 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:25.422 2 INFO neutron.agent.securitygroups_rpc [None req-dd385026-e816-420f-a351-7652f9735a8b e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:25 localhost systemd[1]: var-lib-containers-storage-overlay-d7af7e398dae285d45d069130170b8c66994cef9726e98fc0beb2e1c05467b9a-merged.mount: Deactivated successfully. Nov 28 05:06:25 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e030229e49106417441f6cec18f25534071f1d52c1203835ba91718b7398cacc-userdata-shm.mount: Deactivated successfully. Nov 28 05:06:25 localhost systemd[1]: run-netns-qdhcp\x2d844c2297\x2d3cba\x2d435a\x2d8224\x2da5874c8fc772.mount: Deactivated successfully. 
Nov 28 05:06:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 145 MiB data, 811 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 4.1 KiB/s wr, 86 op/s Nov 28 05:06:25 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:06:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:06:26 localhost nova_compute[280168]: 2025-11-28 10:06:26.032 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e173 e173: 6 total, 6 up, 6 in Nov 28 05:06:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:26.504 2 INFO neutron.agent.securitygroups_rpc [None req-1c752cf9-f4dc-4e37-b8ac-f785450dc01f 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:26.656 2 INFO neutron.agent.securitygroups_rpc [None req-0d9bcbd8-d3ad-4d76-88b4-dcdc400ccf8b e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 05:06:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2277 writes, 22K keys, 2277 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2277 writes, 2277 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2277 writes, 22K 
keys, 2277 commit groups, 1.0 writes per commit group, ingest: 36.97 MB, 0.06 MB/s#012Interval WAL: 2277 writes, 2277 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 117.8 0.21 0.06 7 0.030 0 0 0.0 0.0#012 L6 1/0 15.86 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 4.1 149.4 137.0 0.74 0.28 6 0.124 75K 2889 0.0 0.0#012 Sum 1/0 15.86 MB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.1 116.2 132.8 0.96 0.34 13 0.074 75K 2889 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 5.1 116.5 133.1 0.96 0.34 12 0.080 75K 2889 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.1 0.0 0.1 0.1 0.0 0.0 0.0 149.4 137.0 0.74 0.28 6 0.124 75K 2889 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 119.0 0.21 0.06 6 0.035 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.024, interval 0.024#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): 
cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.21 MB/s write, 0.11 GB read, 0.19 MB/s read, 1.0 seconds#012Interval compaction: 0.12 GB write, 0.21 MB/s write, 0.11 GB read, 0.19 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561ae0707350#2 capacity: 308.00 MB usage: 10.60 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000114 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(535,10.08 MB,3.27364%) FilterBlock(13,233.98 KB,0.0741884%) IndexBlock(13,298.30 KB,0.0945797%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 28 05:06:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:26.705 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:26 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:26.740 2 INFO neutron.agent.securitygroups_rpc [None req-2fa3b339-652f-411d-b715-041869496ad1 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:26.752 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' 
Nov 28 05:06:27 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:27.061 2 INFO neutron.agent.securitygroups_rpc [None req-c1a7ab1c-728b-412f-8d01-654c720e39af 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:27 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:27.257 2 INFO neutron.agent.securitygroups_rpc [None req-35d7c929-00df-4969-aae8-f80246e7ea1b e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:27 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:27.573 2 INFO neutron.agent.securitygroups_rpc [None req-c6801d78-6b56-496a-b37f-67c7bc7e4554 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:27 localhost openstack_network_exporter[240973]: ERROR 10:06:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:06:27 localhost openstack_network_exporter[240973]: ERROR 10:06:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:06:27 localhost openstack_network_exporter[240973]: ERROR 10:06:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:06:27 localhost openstack_network_exporter[240973]: ERROR 10:06:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:06:27 localhost openstack_network_exporter[240973]: Nov 28 05:06:27 localhost openstack_network_exporter[240973]: ERROR 10:06:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:06:27 localhost openstack_network_exporter[240973]: Nov 28 05:06:27 localhost 
neutron_dhcp_agent[261342]: 2025-11-28 10:06:27.625 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:27 localhost nova_compute[280168]: 2025-11-28 10:06:27.749 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 175 MiB data, 819 MiB used, 41 GiB / 42 GiB avail; 4.9 MiB/s rd, 2.0 MiB/s wr, 203 op/s Nov 28 05:06:27 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:27.833 2 INFO neutron.agent.securitygroups_rpc [None req-70e4eaf3-c859-433d-b609-e53f73e65383 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:27.856 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e174 e174: 6 total, 6 up, 6 in Nov 28 05:06:28 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:28.324 2 INFO neutron.agent.securitygroups_rpc [None req-8150f43d-22f5-4d4c-88f9-cca24e00dc84 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:28 localhost podman[239012]: time="2025-11-28T10:06:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:06:28 localhost podman[239012]: @ - - [28/Nov/2025:10:06:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158154 "" "Go-http-client/1.1" Nov 28 05:06:28 localhost nova_compute[280168]: 2025-11-28 10:06:28.982 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:28 localhost podman[239012]: @ - - [28/Nov/2025:10:06:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19671 "" "Go-http-client/1.1" Nov 28 05:06:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e175 e175: 6 total, 6 up, 6 in Nov 28 05:06:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:06:29 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2154361656' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:06:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 175 MiB data, 819 MiB used, 41 GiB / 42 GiB avail; 4.5 MiB/s rd, 1.9 MiB/s wr, 189 op/s Nov 28 05:06:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e176 e176: 6 total, 6 up, 6 in Nov 28 05:06:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:06:30 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/885376607' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:06:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:06:30 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/885376607' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e177 e177: 6 total, 6 up, 6 in Nov 28 05:06:30 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:30.319 2 INFO neutron.agent.securitygroups_rpc [None req-61635fc4-7cf2-4d88-91e1-3ec9d744288e 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:30 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:30.345 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408 Nov 28 05:06:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v341: 177 pgs: 177 active+clean; 334 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 32 MiB/s wr, 256 op/s Nov 28 05:06:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e178 e178: 6 total, 6 up, 6 in Nov 28 05:06:32 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:32.269 2 INFO neutron.agent.securitygroups_rpc [None req-b832b7b5-4ef2-4bcf-bb1b-eacd2f3a21fc e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:32 localhost nova_compute[280168]: 2025-11-28 10:06:32.753 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e179 e179: 6 total, 6 up, 6 in Nov 28 05:06:33 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:33.317 2 INFO 
neutron.agent.securitygroups_rpc [None req-995edc3e-e8fd-43bb-892b-18b0775677c3 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 334 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 5.5 MiB/s rd, 32 MiB/s wr, 256 op/s Nov 28 05:06:33 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:33.830 2 INFO neutron.agent.securitygroups_rpc [None req-ca5aaf48-8c6f-4efd-a0a9-4566338fc9f9 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:33 localhost nova_compute[280168]: 2025-11-28 10:06:33.986 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:34 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:34.584 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a3:db:09 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b879ef3c-9a06-48a8-9e87-0eac0ec86fcf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=6db6a620-dcc3-4cb5-ab27-f70881c20730) old=Port_Binding(mac=['fa:16:3e:a3:db:09 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:34 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:34.586 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 6db6a620-dcc3-4cb5-ab27-f70881c20730 in datapath d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3 updated#033[00m Nov 28 05:06:34 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:34.588 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d1c7e9a2-1241-45c0-8a7d-a563a8d4e9f3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:34 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:34.589 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f50453-d186-4f22-970b-dae020d0567a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:34 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:34.616 2 INFO neutron.agent.securitygroups_rpc [None 
req-c85e903a-7c43-4db3-84d5-12d5f3b5c956 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e180 e180: 6 total, 6 up, 6 in Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0. Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.410804) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395410865, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1972, "num_deletes": 266, "total_data_size": 2677974, "memory_usage": 2724232, "flush_reason": "Manual Compaction"} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395425376, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1738394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21054, "largest_seqno": 23021, "table_properties": {"data_size": 1730746, "index_size": 4541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16990, "raw_average_key_size": 20, "raw_value_size": 1714900, "raw_average_value_size": 2099, "num_data_blocks": 197, 
"num_entries": 817, "num_filter_entries": 817, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324288, "oldest_key_time": 1764324288, "file_creation_time": 1764324395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14628 microseconds, and 5396 cpu microseconds. Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.425431) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1738394 bytes OK Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.425457) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.427721) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.427744) EVENT_LOG_v1 {"time_micros": 1764324395427738, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.427766) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2668824, prev total WAL file size 2668824, number of live WAL files 2. Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.428572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303231' seq:72057594037927935, type:22 .. 
'6C6F676D0034323735' seq:0, type:0; will stop at (end) Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1697KB)], [30(15MB)] Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395428628, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 18373680, "oldest_snapshot_seqno": -1} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12779 keys, 17853968 bytes, temperature: kUnknown Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395557928, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17853968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17779239, "index_size": 41668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32005, "raw_key_size": 341760, "raw_average_key_size": 26, "raw_value_size": 17560025, "raw_average_value_size": 1374, "num_data_blocks": 1585, "num_entries": 12779, "num_filter_entries": 12779, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324395, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.558387) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17853968 bytes Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.560916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 15.9 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(20.8) write-amplify(10.3) OK, records in: 13326, records dropped: 547 output_compression: NoCompression Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.560956) EVENT_LOG_v1 {"time_micros": 1764324395560937, "job": 16, "event": "compaction_finished", "compaction_time_micros": 129519, "compaction_time_cpu_micros": 51438, "output_level": 6, "num_output_files": 1, "total_output_size": 17853968, "num_input_records": 13326, "num_output_records": 12779, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005538515/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395561528, "job": 16, "event": "table_file_deletion", "file_number": 32} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324395564622, "job": 16, "event": "table_file_deletion", "file_number": 30} Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.428497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.564733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.564741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.564744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.564747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:06:35.564750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:06:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:06:35 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:35.797 2 INFO neutron.agent.securitygroups_rpc [None req-8c0c18c9-b8b0-46ef-89c5-259d43510268 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 334 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 23 MiB/s wr, 181 op/s Nov 28 05:06:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:06:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:06:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:06:35 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:06:36 localhost podman[317469]: 2025-11-28 10:06:36.010769741 +0000 UTC m=+0.104988381 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm)
Nov 28 05:06:36 localhost podman[317471]: 2025-11-28 10:06:36.048554188 +0000 UTC m=+0.136428752 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 05:06:36 localhost podman[317471]: 2025-11-28 10:06:36.0521768 +0000 UTC m=+0.140051374 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 05:06:36 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 05:06:36 localhost podman[317469]: 2025-11-28 10:06:36.072371133 +0000 UTC m=+0.166589773 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 28 05:06:36 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 05:06:36 localhost podman[317470]: 2025-11-28 10:06:36.148593206 +0000 UTC m=+0.240150504 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 28 05:06:36 localhost podman[317472]: 2025-11-28 10:06:36.21446482 +0000 UTC m=+0.299886749 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 05:06:36 localhost podman[317472]: 2025-11-28 10:06:36.223733045 +0000 UTC m=+0.309154964 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 05:06:36 localhost podman[317470]: 2025-11-28 10:06:36.234669113 +0000 UTC m=+0.326226471 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 28 05:06:36 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully.
Nov 28 05:06:36 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 05:06:36 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:36.608 2 INFO neutron.agent.securitygroups_rpc [None req-9cc8449a-c364-4646-9e4e-de66c7fab687 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m
Nov 28 05:06:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 28 05:06:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:37.098 261346 INFO neutron.agent.linux.ip_lib [None req-17ddab61-d1bc-422c-b5c6-fddc90895032 - - - - - -] Device tap8223ff8a-d8 cannot be used as it has no MAC address#033[00m
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.123 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost kernel: device tap8223ff8a-d8 entered promiscuous mode
Nov 28 05:06:37 localhost NetworkManager[5965]: [1764324397.1326] manager: (tap8223ff8a-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Nov 28 05:06:37 localhost ovn_controller[152726]: 2025-11-28T10:06:37Z|00155|binding|INFO|Claiming lport 8223ff8a-d864-4e3d-8c9a-3a343402a080 for this chassis.
Nov 28 05:06:37 localhost ovn_controller[152726]: 2025-11-28T10:06:37Z|00156|binding|INFO|8223ff8a-d864-4e3d-8c9a-3a343402a080: Claiming unknown
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.134 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost systemd-udevd[317561]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 05:06:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:37.146 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-af6f0c29-9935-4fbd-b4e8-6bde23565f73', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af6f0c29-9935-4fbd-b4e8-6bde23565f73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=200f951a-6408-4a85-8a6d-abf60f0f240f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8223ff8a-d864-4e3d-8c9a-3a343402a080) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 05:06:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:37.148 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8223ff8a-d864-4e3d-8c9a-3a343402a080 in datapath af6f0c29-9935-4fbd-b4e8-6bde23565f73 bound to our chassis#033[00m
Nov 28 05:06:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:37.150 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network af6f0c29-9935-4fbd-b4e8-6bde23565f73 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 28 05:06:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:37.151 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[63f90fc4-e98b-4e55-b097-f2d0abb85e1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost ovn_controller[152726]: 2025-11-28T10:06:37Z|00157|binding|INFO|Setting lport 8223ff8a-d864-4e3d-8c9a-3a343402a080 ovn-installed in OVS
Nov 28 05:06:37 localhost ovn_controller[152726]: 2025-11-28T10:06:37Z|00158|binding|INFO|Setting lport 8223ff8a-d864-4e3d-8c9a-3a343402a080 up in Southbound
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.175 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost journal[228057]: ethtool ioctl error on tap8223ff8a-d8: No such device
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.213 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.250 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e181 e181: 6 total, 6 up, 6 in
Nov 28 05:06:37 localhost nova_compute[280168]: 2025-11-28 10:06:37.761 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 508 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 3.8 MiB/s rd, 26 MiB/s wr, 195 op/s
Nov 28 05:06:38 localhost podman[317632]: 
Nov 28 05:06:38 localhost podman[317632]: 2025-11-28 10:06:38.16653744 +0000 UTC m=+0.096019486 container create 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:06:38 localhost systemd[1]: Started libpod-conmon-9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51.scope.
Nov 28 05:06:38 localhost podman[317632]: 2025-11-28 10:06:38.120583601 +0000 UTC m=+0.050065677 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 28 05:06:38 localhost systemd[1]: tmp-crun.01q0lp.mount: Deactivated successfully.
Nov 28 05:06:38 localhost systemd[1]: Started libcrun container.
Nov 28 05:06:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2663c8f4edf79b99518629935207b6fb933c056adfabba625d01c99d738355ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 05:06:38 localhost podman[317632]: 2025-11-28 10:06:38.252939857 +0000 UTC m=+0.182421893 container init 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 05:06:38 localhost podman[317632]: 2025-11-28 10:06:38.264994719 +0000 UTC m=+0.194476795 container start 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 05:06:38 localhost dnsmasq[317649]: started, version 2.85 cachesize 150
Nov 28 05:06:38 localhost dnsmasq[317649]: DNS service limited to local subnets
Nov 28 05:06:38 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:38.268 2 INFO neutron.agent.securitygroups_rpc [None req-f5ad8b43-3120-4741-9a0b-dea18e860a97 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m
Nov 28 05:06:38 localhost dnsmasq[317649]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 28 05:06:38 localhost dnsmasq[317649]: warning: no upstream servers configured
Nov 28 05:06:38 localhost dnsmasq-dhcp[317649]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 28 05:06:38 localhost dnsmasq[317649]: read /var/lib/neutron/dhcp/af6f0c29-9935-4fbd-b4e8-6bde23565f73/addn_hosts - 0 addresses
Nov 28 05:06:38 localhost dnsmasq-dhcp[317649]: read /var/lib/neutron/dhcp/af6f0c29-9935-4fbd-b4e8-6bde23565f73/host
Nov 28 05:06:38 localhost dnsmasq-dhcp[317649]: read /var/lib/neutron/dhcp/af6f0c29-9935-4fbd-b4e8-6bde23565f73/opts
Nov 28 05:06:38 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:38.382 261346 INFO neutron.agent.dhcp.agent [None req-364bf6f8-f136-4f9a-bdbb-5b6503ee835a - - - - - -] DHCP configuration for ports {'a2264f56-a714-4119-a161-0f64b5e5f509'} is completed#033[00m
Nov 28 05:06:38 localhost dnsmasq[317649]: exiting on receipt of SIGTERM
Nov 28 05:06:38 localhost podman[317667]: 2025-11-28 10:06:38.672449487 +0000 UTC m=+0.059036774 container kill 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 28 05:06:38 localhost systemd[1]: libpod-9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51.scope: Deactivated successfully.
Nov 28 05:06:38 localhost podman[317680]: 2025-11-28 10:06:38.74835554 +0000 UTC m=+0.059440236 container died 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 05:06:38 localhost podman[317680]: 2025-11-28 10:06:38.779318176 +0000 UTC m=+0.090402832 container cleanup 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:06:38 localhost systemd[1]: libpod-conmon-9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51.scope: Deactivated successfully.
Nov 28 05:06:38 localhost podman[317682]: 2025-11-28 10:06:38.828714401 +0000 UTC m=+0.134917446 container remove 9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af6f0c29-9935-4fbd-b4e8-6bde23565f73, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 28 05:06:38 localhost nova_compute[280168]: 2025-11-28 10:06:38.895 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:38 localhost ovn_controller[152726]: 2025-11-28T10:06:38Z|00159|binding|INFO|Releasing lport 8223ff8a-d864-4e3d-8c9a-3a343402a080 from this chassis (sb_readonly=0)
Nov 28 05:06:38 localhost kernel: device tap8223ff8a-d8 left promiscuous mode
Nov 28 05:06:38 localhost ovn_controller[152726]: 2025-11-28T10:06:38Z|00160|binding|INFO|Setting lport 8223ff8a-d864-4e3d-8c9a-3a343402a080 down in Southbound
Nov 28 05:06:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:38.908 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-af6f0c29-9935-4fbd-b4e8-6bde23565f73', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af6f0c29-9935-4fbd-b4e8-6bde23565f73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=200f951a-6408-4a85-8a6d-abf60f0f240f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8223ff8a-d864-4e3d-8c9a-3a343402a080) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 05:06:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:38.910 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8223ff8a-d864-4e3d-8c9a-3a343402a080 in datapath af6f0c29-9935-4fbd-b4e8-6bde23565f73 unbound from our chassis#033[00m
Nov 28 05:06:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:38.912 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network af6f0c29-9935-4fbd-b4e8-6bde23565f73 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 28 05:06:38 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:38.913 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[71cad528-6272-4d57-9fe0-6e0c8f01dd83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 05:06:38 localhost nova_compute[280168]: 2025-11-28 10:06:38.918 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:38 localhost nova_compute[280168]: 2025-11-28 10:06:38.988 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 05:06:39 localhost systemd[1]: var-lib-containers-storage-overlay-2663c8f4edf79b99518629935207b6fb933c056adfabba625d01c99d738355ca-merged.mount: Deactivated successfully.
Nov 28 05:06:39 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9843dcb5f0688513d98e565130cb502bba1e44def357e175bb83adf33a558a51-userdata-shm.mount: Deactivated successfully.
Nov 28 05:06:39 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:39.202 2 INFO neutron.agent.securitygroups_rpc [None req-cef696eb-a0d2-4ba5-86dc-1a9cdfd33b8c 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m
Nov 28 05:06:39 localhost podman[317712]: 2025-11-28 10:06:39.245000782 +0000 UTC m=+0.088468523 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 05:06:39 localhost systemd[1]: run-netns-qdhcp\x2daf6f0c29\x2d9935\x2d4fbd\x2db4e8\x2d6bde23565f73.mount: Deactivated successfully.
Nov 28 05:06:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:39.275 261346 INFO neutron.agent.dhcp.agent [None req-c9a4ee6c-f8d8-4616-bb9c-f10e6ecd9db3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:06:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:39.276 261346 INFO neutron.agent.dhcp.agent [None req-c9a4ee6c-f8d8-4616-bb9c-f10e6ecd9db3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:06:39 localhost podman[317712]: 2025-11-28 10:06:39.281421826 +0000 UTC m=+0.124889557 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Nov 28 05:06:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:39.281 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:06:39 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 05:06:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Nov 28 05:06:39 localhost nova_compute[280168]: 2025-11-28 10:06:39.608 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:06:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 508 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 3.7 MiB/s rd, 25 MiB/s wr, 187 op/s
Nov 28 05:06:40 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:40.091 2 INFO neutron.agent.securitygroups_rpc [None req-11baa0de-2c9c-4582-9ccd-cefaf494809d e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m
Nov 28 05:06:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:06:40 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/859948979' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:06:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:06:40 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/859948979' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:06:40 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:40.932 2 INFO neutron.agent.securitygroups_rpc [None req-13572f85-aaed-465a-b457-59f9816ff0f0 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m
Nov 28 05:06:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Nov 28 05:06:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:06:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 174 active+clean; 598 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 3.7 MiB/s rd, 48 MiB/s wr, 315 op/s
Nov 28 05:06:42 localhost nova_compute[280168]: 2025-11-28 10:06:42.252 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:06:42 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:42.353 2 INFO neutron.agent.securitygroups_rpc [None req-38d8b4a3-75dd-41d2-a0db-a9c73ae0e2bb e32848e36ae94f66ae634ff4d7716d6f 
8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e184 e184: 6 total, 6 up, 6 in Nov 28 05:06:42 localhost nova_compute[280168]: 2025-11-28 10:06:42.764 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:42 localhost nova_compute[280168]: 2025-11-28 10:06:42.768 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:42 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:42.819 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:06:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:06:42 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:42.974 2 INFO neutron.agent.securitygroups_rpc [None req-c8fe2171-fc33-407b-b80c-443549ec2e39 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:42 localhost podman[317735]: 2025-11-28 10:06:42.984461508 +0000 UTC m=+0.090464324 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:06:43 localhost podman[317735]: 2025-11-28 10:06:43.031564712 +0000 UTC m=+0.137567588 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, 
io.buildah.version=1.41.3) Nov 28 05:06:43 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:06:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 174 active+clean; 598 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 89 KiB/s rd, 23 MiB/s wr, 128 op/s Nov 28 05:06:43 localhost nova_compute[280168]: 2025-11-28 10:06:43.990 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:44.226 261346 INFO neutron.agent.linux.ip_lib [None req-8f1fb084-d397-4674-90ab-1db345b3e56d - - - - - -] Device tapb856d1b5-24 cannot be used as it has no MAC address#033[00m Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.253 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.258 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:44 localhost kernel: device tapb856d1b5-24 entered promiscuous mode Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.261 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost ovn_controller[152726]: 2025-11-28T10:06:44Z|00161|binding|INFO|Claiming lport b856d1b5-24d8-464e-9e6f-00bfee193aae for this chassis. 
Nov 28 05:06:44 localhost ovn_controller[152726]: 2025-11-28T10:06:44Z|00162|binding|INFO|b856d1b5-24d8-464e-9e6f-00bfee193aae: Claiming unknown Nov 28 05:06:44 localhost NetworkManager[5965]: [1764324404.2658] manager: (tapb856d1b5-24): new Generic device (/org/freedesktop/NetworkManager/Devices/36) Nov 28 05:06:44 localhost systemd-udevd[317763]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:06:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:44.274 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-c97d9106-1e14-4509-8744-407acebde871', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97d9106-1e14-4509-8744-407acebde871', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2baca3cd-7ce9-43a6-b4e9-ca22706fa29a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=b856d1b5-24d8-464e-9e6f-00bfee193aae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:44.276 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 
b856d1b5-24d8-464e-9e6f-00bfee193aae in datapath c97d9106-1e14-4509-8744-407acebde871 bound to our chassis#033[00m Nov 28 05:06:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:44.279 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port cb21ac71-ba5c-494b-b748-80129c041c9a IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:06:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:44.279 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97d9106-1e14-4509-8744-407acebde871, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:44 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:44.280 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[c51f299e-cf28-410d-97ff-f982281995bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.315 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost ovn_controller[152726]: 2025-11-28T10:06:44Z|00163|binding|INFO|Setting lport b856d1b5-24d8-464e-9e6f-00bfee193aae ovn-installed in OVS Nov 28 05:06:44 localhost ovn_controller[152726]: 2025-11-28T10:06:44Z|00164|binding|INFO|Setting lport b856d1b5-24d8-464e-9e6f-00bfee193aae up in Southbound Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.323 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.356 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost nova_compute[280168]: 2025-11-28 10:06:44.386 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:44.715 2 INFO neutron.agent.securitygroups_rpc [None req-f4e28522-faee-4418-9069-18a470f0a6f6 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e185 e185: 6 total, 6 up, 6 in Nov 28 05:06:45 localhost nova_compute[280168]: 2025-11-28 10:06:45.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:45 localhost nova_compute[280168]: 2025-11-28 10:06:45.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:45 localhost nova_compute[280168]: 2025-11-28 10:06:45.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:06:45 localhost podman[317819]: Nov 28 05:06:45 localhost podman[317819]: 2025-11-28 10:06:45.36582624 +0000 UTC m=+0.099173513 container create 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:06:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e186 e186: 6 total, 6 up, 6 in Nov 28 05:06:45 localhost systemd[1]: Started libpod-conmon-90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108.scope. 
Nov 28 05:06:45 localhost podman[317819]: 2025-11-28 10:06:45.308517891 +0000 UTC m=+0.041865154 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:06:45 localhost ovn_controller[152726]: 2025-11-28T10:06:45Z|00165|binding|INFO|Removing iface tapb856d1b5-24 ovn-installed in OVS Nov 28 05:06:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:45.422 158530 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port cb21ac71-ba5c-494b-b748-80129c041c9a with type ""#033[00m Nov 28 05:06:45 localhost ovn_controller[152726]: 2025-11-28T10:06:45Z|00166|binding|INFO|Removing lport b856d1b5-24d8-464e-9e6f-00bfee193aae ovn-installed in OVS Nov 28 05:06:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:45.425 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-c97d9106-1e14-4509-8744-407acebde871', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c97d9106-1e14-4509-8744-407acebde871', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8499932c523b4e26933fff84403e296e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2baca3cd-7ce9-43a6-b4e9-ca22706fa29a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], 
logical_port=b856d1b5-24d8-464e-9e6f-00bfee193aae) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:45 localhost nova_compute[280168]: 2025-11-28 10:06:45.425 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:45 localhost systemd[1]: Started libcrun container. Nov 28 05:06:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:45.427 158530 INFO neutron.agent.ovn.metadata.agent [-] Port b856d1b5-24d8-464e-9e6f-00bfee193aae in datapath c97d9106-1e14-4509-8744-407acebde871 unbound from our chassis#033[00m Nov 28 05:06:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:45.430 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c97d9106-1e14-4509-8744-407acebde871, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:45 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:45.431 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[f394d95f-5d41-4c26-b2a5-18e83d68aac7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:45 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c5be43be4f18623af7894625e337eeb187d588c0e78f5f686988f8c5118c154/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:06:45 localhost nova_compute[280168]: 2025-11-28 10:06:45.435 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:45 localhost podman[317819]: 2025-11-28 10:06:45.443791226 +0000 UTC m=+0.177138479 container init 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:06:45 localhost podman[317819]: 2025-11-28 10:06:45.451893857 +0000 UTC m=+0.185241110 container start 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:45 localhost dnsmasq[317837]: started, version 2.85 cachesize 150 Nov 28 05:06:45 localhost dnsmasq[317837]: DNS service limited to local subnets Nov 28 05:06:45 localhost dnsmasq[317837]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:06:45 localhost dnsmasq[317837]: warning: no upstream servers configured Nov 28 05:06:45 localhost dnsmasq-dhcp[317837]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:06:45 localhost dnsmasq[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/addn_hosts - 0 addresses Nov 28 05:06:45 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/host Nov 28 05:06:45 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/opts Nov 28 05:06:45 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:06:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2385808969' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:06:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:06:45 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2385808969' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:06:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:45.748 261346 INFO neutron.agent.dhcp.agent [None req-103cbf91-8e88-4f3c-a833-f7875801133c - - - - - -] DHCP configuration for ports {'e05b8b57-3cba-4858-b841-64f615521ed6'} is completed#033[00m Nov 28 05:06:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 174 active+clean; 598 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 128 KiB/s rd, 33 MiB/s wr, 185 op/s Nov 28 05:06:45 localhost dnsmasq[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/addn_hosts - 0 addresses Nov 28 05:06:45 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/host Nov 28 05:06:45 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/opts Nov 28 05:06:45 localhost podman[317855]: 2025-11-28 10:06:45.965345027 +0000 UTC m=+0.045868698 container kill 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:46.126 2 INFO neutron.agent.securitygroups_rpc [None req-33288b8b-e565-4871-a7ed-f7e6a715772e 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:46 localhost nova_compute[280168]: 2025-11-28 10:06:46.147 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:46 localhost kernel: device tapb856d1b5-24 left promiscuous mode Nov 28 05:06:46 localhost nova_compute[280168]: 2025-11-28 10:06:46.163 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.170 261346 INFO neutron.agent.dhcp.agent [None req-2cb02d81-5c29-434b-8adc-47b9885c74db - - - - - -] DHCP configuration for ports {'e05b8b57-3cba-4858-b841-64f615521ed6'} is completed#033[00m Nov 28 05:06:46 localhost dnsmasq[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/addn_hosts - 0 addresses Nov 28 05:06:46 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/host Nov 28 05:06:46 localhost dnsmasq-dhcp[317837]: read /var/lib/neutron/dhcp/c97d9106-1e14-4509-8744-407acebde871/opts Nov 28 05:06:46 localhost podman[317894]: 2025-11-28 10:06:46.391513493 +0000 UTC m=+0.064050668 container kill 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent [None req-9120482d-36a1-4d6e-8558-9d87ecbe8bf9 - - - - - -] Unable to reload_allocations dhcp for c97d9106-1e14-4509-8744-407acebde871.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb856d1b5-24 not found in namespace qdhcp-c97d9106-1e14-4509-8744-407acebde871. Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent Traceback (most recent call last): Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent rv = getattr(driver, action)(**action_kwargs) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent self.device_manager.update(self.network, self.interface_name) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 
2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent self._set_default_route(network, device_name) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent self._set_default_route_ip_version(network, device_name, Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent gateway = device.route.get_gateway(ip_version=ip_version) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent routes = self.list_routes(ip_version, scope=scope, table=table) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent return list_ip_routes(self._parent.namespace, ip_version, scope=scope, Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR 
neutron.agent.dhcp.agent routes = privileged.list_ip_routes(namespace, ip_version, device=device, Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent return self(f, *args, **kw) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__ Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent do = self.iter(retry_state=retry_state) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent return fut.result() Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent return self.__get_result() Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent raise self._exception Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in 
__call__ Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent result = fn(*args, **kwargs) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent return self.channel.remote_call(name, args, kwargs, Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent raise exc_type(*result[2]) Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapb856d1b5-24 not found in namespace qdhcp-c97d9106-1e14-4509-8744-407acebde871. 
Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.422 261346 ERROR neutron.agent.dhcp.agent #033[00m Nov 28 05:06:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:46.426 261346 INFO neutron.agent.dhcp.agent [None req-ba59b665-bbba-47f6-9efa-63a243c41137 - - - - - -] Synchronizing state#033[00m Nov 28 05:06:46 localhost nova_compute[280168]: 2025-11-28 10:06:46.586 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:46.770 2 INFO neutron.agent.securitygroups_rpc [None req-92a83df5-6e17-4f34-81bc-c1c1907c0872 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:06:46 localhost nova_compute[280168]: 2025-11-28 10:06:46.994 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:46 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:46.994 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:46 localhost 
ovn_metadata_agent[158525]: 2025-11-28 10:06:46.996 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.090 261346 INFO neutron.agent.dhcp.agent [None req-836082ad-d917-465d-b846-8d8d23285b85 - - - - - -] All active networks have been fetched through RPC.#033[00m Nov 28 05:06:47 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:47.168 2 INFO neutron.agent.securitygroups_rpc [None req-09174bbe-f95e-4ff7-9cd4-901486ff4f01 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.260 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:06:47 localhost podman[317925]: 2025-11-28 10:06:47.263568133 +0000 UTC m=+0.053674678 container kill 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 05:06:47 localhost dnsmasq[317837]: exiting on receipt of SIGTERM Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:47 localhost systemd[1]: libpod-90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108.scope: Deactivated successfully. 
Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.266 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.290 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.291 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.291 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.292 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.292 280172 DEBUG oslo_concurrency.processutils [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:06:47 localhost podman[317937]: 2025-11-28 10:06:47.329648513 +0000 UTC m=+0.052939815 container died 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:47 localhost systemd[1]: tmp-crun.ar2nUr.mount: Deactivated successfully. Nov 28 05:06:47 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108-userdata-shm.mount: Deactivated successfully. Nov 28 05:06:47 localhost systemd[1]: var-lib-containers-storage-overlay-0c5be43be4f18623af7894625e337eeb187d588c0e78f5f686988f8c5118c154-merged.mount: Deactivated successfully. 
Nov 28 05:06:47 localhost podman[317937]: 2025-11-28 10:06:47.382346609 +0000 UTC m=+0.105637871 container cleanup 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:47 localhost systemd[1]: libpod-conmon-90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108.scope: Deactivated successfully. Nov 28 05:06:47 localhost podman[317939]: 2025-11-28 10:06:47.465402873 +0000 UTC m=+0.180505952 container remove 90d3c2900ca27dbb7e1b18db7709b8400d197bf203912f3ec75ad9fd66320108 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c97d9106-1e14-4509-8744-407acebde871, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.499 261346 INFO neutron.agent.dhcp.agent [-] Starting network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb dhcp configuration#033[00m Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.499 261346 INFO neutron.agent.dhcp.agent [-] Finished network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb dhcp configuration#033[00m Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.499 261346 INFO neutron.agent.dhcp.agent [-] 
Starting network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.499 261346 INFO neutron.agent.dhcp.agent [-] Finished network 744b5a82-3c5c-4b41-ba44-527244a209c4 dhcp configuration#033[00m Nov 28 05:06:47 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:47.499 261346 INFO neutron.agent.dhcp.agent [None req-ccd65a32-33df-4120-8f5f-c333fd0d9b15 - - - - - -] Synchronizing state complete#033[00m Nov 28 05:06:47 localhost systemd[1]: run-netns-qdhcp\x2dc97d9106\x2d1e14\x2d4509\x2d8744\x2d407acebde871.mount: Deactivated successfully. Nov 28 05:06:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:06:47 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3613923806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.777 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:06:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 633 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 119 KiB/s rd, 21 MiB/s wr, 168 op/s Nov 28 05:06:47 localhost nova_compute[280168]: 2025-11-28 10:06:47.809 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.025 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.027 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11511MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.028 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.028 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:06:48 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:48.347 2 INFO neutron.agent.securitygroups_rpc [None req-5e549f88-7906-4d9c-9524-bb7ac558174f 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.410 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.412 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: 
name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.433 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.463 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.464 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory 
/usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.511 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.576 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE
_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 05:06:48 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:06:48 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:06:48 localhost nova_compute[280168]: 2025-11-28 10:06:48.618 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:06:48 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:06:48.624+0000 7fcc87448640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:06:48.624+0000 7fcc87448640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:06:48.624+0000 7fcc87448640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:06:48.624+0000 7fcc87448640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:06:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:06:48.624+0000 7fcc87448640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.040 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:49 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:06:49 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:06:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:06:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", 
"format": "json"}]: dispatch Nov 28 05:06:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:06:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:06:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:06:49 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/419078793' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.202 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.209 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.232 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 
'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.235 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:06:49 localhost nova_compute[280168]: 2025-11-28 10:06:49.235 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:06:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 633 MiB data, 2.2 GiB used, 40 GiB / 42 GiB avail; 100 KiB/s rd, 18 MiB/s wr, 142 op/s Nov 28 05:06:49 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:49.867 2 INFO neutron.agent.securitygroups_rpc [None req-be3aec7b-abf4-4211-97ca-2df83bf1c365 87cf17c48bab44279b2e7eca9e0882a2 d3c0d1ce8d854a7b9ffc953e88cd2c44 - - default default] Security group rule updated ['c52603b5-5f47-4123-b8fe-cc9f0a56d914']#033[00m Nov 28 05:06:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:49.997 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:06:50 localhost 
nova_compute[280168]: 2025-11-28 10:06:50.232 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:50 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:50.319 2 INFO neutron.agent.securitygroups_rpc [None req-500702cc-90b6-4aaa-a265-7d7caae35722 87cf17c48bab44279b2e7eca9e0882a2 d3c0d1ce8d854a7b9ffc953e88cd2c44 - - default default] Security group rule updated ['c52603b5-5f47-4123-b8fe-cc9f0a56d914']#033[00m Nov 28 05:06:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e187 e187: 6 total, 6 up, 6 in Nov 28 05:06:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:50.850 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:06:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:50.850 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:06:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:50.850 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:06:51 localhost nova_compute[280168]: 2025-11-28 10:06:51.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:06:51 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:51.734 2 INFO neutron.agent.securitygroups_rpc [None req-7220764d-588c-486e-8a18-42779bef93d4 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:06:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:06:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 745 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 115 KiB/s rd, 34 MiB/s wr, 169 op/s Nov 28 05:06:52 localhost nova_compute[280168]: 2025-11-28 10:06:52.811 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 745 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 96 KiB/s rd, 29 MiB/s wr, 142 op/s Nov 28 05:06:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1555194e-595d-4b14-be99-858d500899d3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:54 localhost nova_compute[280168]: 2025-11-28 10:06:54.096 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1555194e-595d-4b14-be99-858d500899d3/.meta.tmp' Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1555194e-595d-4b14-be99-858d500899d3/.meta.tmp' to config b'/volumes/_nogroup/1555194e-595d-4b14-be99-858d500899d3/.meta' Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1555194e-595d-4b14-be99-858d500899d3", "format": "json"}]: dispatch Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:06:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 745 MiB data, 2.5 GiB used, 39 GiB / 42 GiB avail; 81 KiB/s rd, 24 MiB/s wr, 119 op/s Nov 28 05:06:55 localhost neutron_sriov_agent[254415]: 2025-11-28 10:06:55.879 2 INFO neutron.agent.securitygroups_rpc [None req-3852d4d4-6237-4177-8a94-f6db1e58a4b7 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:06:55 localhost systemd[1]: tmp-crun.6NLN3X.mount: Deactivated successfully. Nov 28 05:06:55 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:55.887 261346 INFO neutron.agent.linux.ip_lib [None req-27e4944a-e3ba-46ce-9d0d-ad2f93e66f41 - - - - - -] Device tape1272867-53 cannot be used as it has no MAC address#033[00m Nov 28 05:06:55 localhost podman[318023]: 2025-11-28 10:06:55.896906573 +0000 UTC m=+0.114254109 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal) Nov 28 05:06:55 localhost nova_compute[280168]: 2025-11-28 10:06:55.929 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:55 localhost kernel: device tape1272867-53 entered promiscuous mode Nov 28 05:06:55 localhost podman[318023]: 2025-11-28 10:06:55.936159734 +0000 UTC m=+0.153507210 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc.) 
Nov 28 05:06:55 localhost ovn_controller[152726]: 2025-11-28T10:06:55Z|00167|binding|INFO|Claiming lport e1272867-532b-4f64-b1d3-8e10c12195a2 for this chassis. Nov 28 05:06:55 localhost NetworkManager[5965]: [1764324415.9398] manager: (tape1272867-53): new Generic device (/org/freedesktop/NetworkManager/Devices/37) Nov 28 05:06:55 localhost ovn_controller[152726]: 2025-11-28T10:06:55Z|00168|binding|INFO|e1272867-532b-4f64-b1d3-8e10c12195a2: Claiming unknown Nov 28 05:06:55 localhost nova_compute[280168]: 2025-11-28 10:06:55.938 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:55 localhost systemd-udevd[318050]: Network interface NamePolicy= disabled on kernel command line. Nov 28 05:06:55 localhost ovn_controller[152726]: 2025-11-28T10:06:55Z|00169|binding|INFO|Setting lport e1272867-532b-4f64-b1d3-8e10c12195a2 ovn-installed in OVS Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:55 localhost nova_compute[280168]: 2025-11-28 10:06:55.970 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:55 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:55 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:56 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:56 localhost journal[228057]: ethtool ioctl error on tape1272867-53: No such device Nov 28 05:06:56 localhost nova_compute[280168]: 2025-11-28 10:06:56.016 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:56 localhost nova_compute[280168]: 2025-11-28 10:06:56.042 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:56 localhost ovn_controller[152726]: 2025-11-28T10:06:56Z|00170|binding|INFO|Setting lport e1272867-532b-4f64-b1d3-8e10c12195a2 up in Southbound Nov 28 05:06:56 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:56.156 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-5d55e4ac-ff66-4512-9e4c-487c005fe37c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d55e4ac-ff66-4512-9e4c-487c005fe37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '1', 
'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=211a2e3d-b3b5-43ed-95be-d9132daa612f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e1272867-532b-4f64-b1d3-8e10c12195a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:06:56 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:56.158 158530 INFO neutron.agent.ovn.metadata.agent [-] Port e1272867-532b-4f64-b1d3-8e10c12195a2 in datapath 5d55e4ac-ff66-4512-9e4c-487c005fe37c bound to our chassis#033[00m Nov 28 05:06:56 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:56.161 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port db635ae2-689f-4d6e-a9b9-1757ca00c3a5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:06:56 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:56.161 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d55e4ac-ff66-4512-9e4c-487c005fe37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:06:56 localhost ovn_metadata_agent[158525]: 2025-11-28 10:06:56.162 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[50fa527b-a47e-4b9d-96a9-636a749b6623]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:06:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:06:57 localhost openstack_network_exporter[240973]: ERROR 10:06:57 appctl.go:144: Failed to get PID for 
ovn-northd: no control socket files found for ovn-northd Nov 28 05:06:57 localhost openstack_network_exporter[240973]: ERROR 10:06:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:06:57 localhost openstack_network_exporter[240973]: ERROR 10:06:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:06:57 localhost openstack_network_exporter[240973]: ERROR 10:06:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:06:57 localhost openstack_network_exporter[240973]: Nov 28 05:06:57 localhost openstack_network_exporter[240973]: ERROR 10:06:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:06:57 localhost openstack_network_exporter[240973]: Nov 28 05:06:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 865 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 19 KiB/s rd, 23 MiB/s wr, 36 op/s Nov 28 05:06:57 localhost nova_compute[280168]: 2025-11-28 10:06:57.828 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "1555194e-595d-4b14-be99-858d500899d3", "new_size": 2147483648, "format": "json"}]: dispatch Nov 28 05:06:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, 
sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:06:58 localhost podman[318121]: Nov 28 05:06:58 localhost podman[318121]: 2025-11-28 10:06:58.63932373 +0000 UTC m=+0.129729156 container create 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS) Nov 28 05:06:58 localhost podman[318121]: 2025-11-28 10:06:58.55087604 +0000 UTC m=+0.041281496 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:06:58 localhost systemd[1]: Started libpod-conmon-6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792.scope. Nov 28 05:06:58 localhost systemd[1]: tmp-crun.KYZd7U.mount: Deactivated successfully. Nov 28 05:06:58 localhost systemd[1]: Started libcrun container. 
Nov 28 05:06:58 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce0eb26f941760bc165071691d15f2ff2f716526f20d4e0a559e47f82947841/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:06:58 localhost podman[318121]: 2025-11-28 10:06:58.720259029 +0000 UTC m=+0.210664445 container init 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:06:58 localhost podman[318121]: 2025-11-28 10:06:58.732409034 +0000 UTC m=+0.222814450 container start 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:06:58 localhost dnsmasq[318139]: started, version 2.85 cachesize 150 Nov 28 05:06:58 localhost dnsmasq[318139]: DNS service limited to local subnets Nov 28 05:06:58 localhost dnsmasq[318139]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:06:58 localhost dnsmasq[318139]: warning: no upstream servers 
configured Nov 28 05:06:58 localhost dnsmasq-dhcp[318139]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:06:58 localhost dnsmasq[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/addn_hosts - 0 addresses Nov 28 05:06:58 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/host Nov 28 05:06:58 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/opts Nov 28 05:06:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:58.793 261346 INFO neutron.agent.dhcp.agent [None req-063925d6-8b30-43b7-98cc-07bf1eb3d373 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:55Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c60962a6-b068-4e59-9a3d-1385700e4916, ip_allocation=immediate, mac_address=fa:16:3e:77:98:c0, name=tempest-PortsTestJSON-183814121, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:51Z, description=, dns_domain=, id=5d55e4ac-ff66-4512-9e4c-487c005fe37c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1418877084, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=19323, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2432, status=ACTIVE, subnets=['f0db51dd-f2de-43a4-beee-2ae93e9b7dfe'], tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:06:53Z, vlan_transparent=None, network_id=5d55e4ac-ff66-4512-9e4c-487c005fe37c, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, 
qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b'], standard_attr_id=2439, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:06:55Z on network 5d55e4ac-ff66-4512-9e4c-487c005fe37c#033[00m Nov 28 05:06:58 localhost podman[239012]: time="2025-11-28T10:06:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:06:58 localhost podman[239012]: @ - - [28/Nov/2025:10:06:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159978 "" "Go-http-client/1.1" Nov 28 05:06:58 localhost podman[239012]: @ - - [28/Nov/2025:10:06:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20149 "" "Go-http-client/1.1" Nov 28 05:06:59 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:06:59.092 261346 INFO neutron.agent.dhcp.agent [None req-03e9dfeb-0339-4c4f-8eea-327a3f72d469 - - - - - -] DHCP configuration for ports {'21912150-8f04-412e-8741-004f8151ee9a'} is completed#033[00m Nov 28 05:06:59 localhost nova_compute[280168]: 2025-11-28 10:06:59.133 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:06:59 localhost dnsmasq[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/addn_hosts - 1 addresses Nov 28 05:06:59 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/host Nov 28 05:06:59 localhost podman[318157]: 2025-11-28 10:06:59.268814432 +0000 UTC m=+0.065415040 container kill 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:06:59 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/opts Nov 28 05:06:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 865 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 19 KiB/s rd, 23 MiB/s wr, 36 op/s Nov 28 05:07:01 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:01.435 261346 INFO neutron.agent.dhcp.agent [None req-e920b98c-c2f6-4088-abca-741f8898d45f - - - - - -] DHCP configuration for ports {'c60962a6-b068-4e59-9a3d-1385700e4916'} is completed#033[00m Nov 28 05:07:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 1.0 GiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 52 KiB/s rd, 33 MiB/s wr, 87 op/s Nov 28 05:07:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1555194e-595d-4b14-be99-858d500899d3", "format": "json"}]: dispatch Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1555194e-595d-4b14-be99-858d500899d3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1555194e-595d-4b14-be99-858d500899d3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:02 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.066+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1555194e-595d-4b14-be99-858d500899d3' of type subvolume Nov 28 05:07:02 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1555194e-595d-4b14-be99-858d500899d3' of type subvolume Nov 28 05:07:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1555194e-595d-4b14-be99-858d500899d3", "force": true, "format": "json"}]: dispatch Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1555194e-595d-4b14-be99-858d500899d3'' moved to trashcan Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:07:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1555194e-595d-4b14-be99-858d500899d3, vol_name:cephfs) < "" Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.090+0000 7fcc8a44e640 -1 
client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.090+0000 7fcc8a44e640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.090+0000 7fcc8a44e640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.090+0000 7fcc8a44e640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.090+0000 7fcc8a44e640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.123+0000 7fcc8944c640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 
2025-11-28T10:07:02.123+0000 7fcc8944c640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.123+0000 7fcc8944c640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.123+0000 7fcc8944c640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:02.123+0000 7fcc8944c640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:07:02 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:02.159 261346 INFO neutron.agent.linux.ip_lib [None req-d3b8a2e5-4327-44cd-b9f2-6ad11454dc94 - - - - - -] Device tap8bc6a73d-61 cannot be used as it has no MAC address#033[00m Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.183 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:02 localhost kernel: device tap8bc6a73d-61 entered promiscuous mode Nov 28 05:07:02 localhost NetworkManager[5965]: [1764324422.1903] manager: (tap8bc6a73d-61): new Generic device (/org/freedesktop/NetworkManager/Devices/38) Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.192 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:02 localhost ovn_controller[152726]: 2025-11-28T10:07:02Z|00171|binding|INFO|Claiming lport 8bc6a73d-610f-4f06-b515-26f3efcf46a4 for this chassis. 
Nov 28 05:07:02 localhost ovn_controller[152726]: 2025-11-28T10:07:02Z|00172|binding|INFO|8bc6a73d-610f-4f06-b515-26f3efcf46a4: Claiming unknown
Nov 28 05:07:02 localhost systemd-udevd[318211]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:02.219 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8462a4a9a313405e8fd212f9ec4a0c92', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe1e7b19-836c-4f4d-9811-92d20be8712f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=8bc6a73d-610f-4f06-b515-26f3efcf46a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 28 05:07:02 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:02.221 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8bc6a73d-610f-4f06-b515-26f3efcf46a4 in datapath 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb bound to our chassis
Nov 28 05:07:02 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:02.223 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Nov 28 05:07:02 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:02.224 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[cafa7bf9-e1ae-4b79-8348-643354c8bcea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.235 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost ovn_controller[152726]: 2025-11-28T10:07:02Z|00173|binding|INFO|Setting lport 8bc6a73d-610f-4f06-b515-26f3efcf46a4 ovn-installed in OVS
Nov 28 05:07:02 localhost ovn_controller[152726]: 2025-11-28T10:07:02Z|00174|binding|INFO|Setting lport 8bc6a73d-610f-4f06-b515-26f3efcf46a4 up in Southbound
Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.240 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost journal[228057]: ethtool ioctl error on tap8bc6a73d-61: No such device
Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.272 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.296 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:02 localhost nova_compute[280168]: 2025-11-28 10:07:02.876 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:03.086 2 INFO neutron.agent.securitygroups_rpc [req-e18f2dde-6bdd-4008-97a4-be84187f4807 req-3c6747af-afe3-40df-86cf-89416982a794 87cf17c48bab44279b2e7eca9e0882a2 d3c0d1ce8d854a7b9ffc953e88cd2c44 - - default default] Security group member updated ['c52603b5-5f47-4123-b8fe-cc9f0a56d914']
Nov 28 05:07:03 localhost podman[318282]:
Nov 28 05:07:03 localhost podman[318282]: 2025-11-28 10:07:03.182258328 +0000 UTC m=+0.096728536 container create 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:07:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:03.185 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:02Z, description=, device_id=640e688c-f2ca-49b5-a84f-ca1ea976a9cd, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=79663a4e-2979-44db-bdea-40e4855cb323, ip_allocation=immediate, mac_address=fa:16:3e:be:70:a6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:04Z, description=, dns_domain=, id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-1354719988-network, port_security_enabled=True, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64958, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2202, status=ACTIVE, subnets=['f4b4dc5d-f654-46e4-8ff2-bd52eff10306'], tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:06:06Z, vlan_transparent=None, network_id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, port_security_enabled=True, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c52603b5-5f47-4123-b8fe-cc9f0a56d914'], standard_attr_id=2457, status=DOWN, tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:07:02Z on network b2c4ac07-8851-40d3-9495-d0489b67c4c3
Nov 28 05:07:03 localhost systemd[1]: Started libpod-conmon-0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184.scope.
Nov 28 05:07:03 localhost podman[318282]: 2025-11-28 10:07:03.137675102 +0000 UTC m=+0.052145330 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 28 05:07:03 localhost systemd[1]: tmp-crun.EpHQew.mount: Deactivated successfully.
Nov 28 05:07:03 localhost systemd[1]: Started libcrun container.
Nov 28 05:07:03 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30c06b143ece65045ee680ac009c0929cadf7b730be6fdcddc88c7afdc918bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 05:07:03 localhost podman[318282]: 2025-11-28 10:07:03.264942951 +0000 UTC m=+0.179413159 container init 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Nov 28 05:07:03 localhost podman[318282]: 2025-11-28 10:07:03.274794935 +0000 UTC m=+0.189265143 container start 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 28 05:07:03 localhost dnsmasq[318300]: started, version 2.85 cachesize 150
Nov 28 05:07:03 localhost dnsmasq[318300]: DNS service limited to local subnets
Nov 28 05:07:03 localhost dnsmasq[318300]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 28 05:07:03 localhost dnsmasq[318300]: warning: no upstream servers configured
Nov 28 05:07:03 localhost dnsmasq-dhcp[318300]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 28 05:07:03 localhost dnsmasq[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses
Nov 28 05:07:03 localhost dnsmasq-dhcp[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host
Nov 28 05:07:03 localhost dnsmasq-dhcp[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts
Nov 28 05:07:03 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 2 addresses
Nov 28 05:07:03 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host
Nov 28 05:07:03 localhost podman[318316]: 2025-11-28 10:07:03.452940445 +0000 UTC m=+0.058234409 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:07:03 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts
Nov 28 05:07:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:03.730 261346 INFO neutron.agent.dhcp.agent [None req-f3624154-5d96-47e2-aec1-9eb16f53c2cb - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', '8646aa95-6463-44cd-8c34-1bec1705e23b'} is completed
Nov 28 05:07:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 1.0 GiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 41 KiB/s rd, 22 MiB/s wr, 67 op/s
Nov 28 05:07:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:03.897 261346 INFO neutron.agent.dhcp.agent [None req-00389324-b119-48e0-8aff-a52530bd8d2a - - - - - -] DHCP configuration for ports {'79663a4e-2979-44db-bdea-40e4855cb323'} is completed
Nov 28 05:07:04 localhost nova_compute[280168]: 2025-11-28 10:07:04.180 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 28 05:07:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:04.544 2 INFO neutron.agent.securitygroups_rpc [None req-ae7d6dfb-c119-45e0-bfec-235340ad22c9 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['19d31bf3-ea7b-49ec-820d-ba3fe5752e88']
Nov 28 05:07:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:04.877 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a9243b0d-5608-4dd1-bf07-987690272773, ip_allocation=immediate, mac_address=fa:16:3e:3a:d2:78, name=tempest-PortsIpV6TestJSON-388426784, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:07Z, description=, dns_domain=, id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1946008216, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49828, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2226, status=ACTIVE, subnets=['5d4879f5-1341-4004-892c-8f9038b89398'], tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:06:56Z, vlan_transparent=None, network_id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['19d31bf3-ea7b-49ec-820d-ba3fe5752e88'], standard_attr_id=2462, status=DOWN, tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:07:04Z on network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb
Nov 28 05:07:05 localhost podman[318354]: 2025-11-28 10:07:05.058011523 +0000 UTC m=+0.047909550 container kill 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 05:07:05 localhost dnsmasq[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses
Nov 28 05:07:05 localhost dnsmasq-dhcp[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host
Nov 28 05:07:05 localhost dnsmasq-dhcp[318300]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts
Nov 28 05:07:05 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:05.545 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005538514.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:02Z, description=, device_id=640e688c-f2ca-49b5-a84f-ca1ea976a9cd, device_owner=compute:nova, dns_assignment=[], dns_domain=, dns_name=tempest-volumesbackupstest-instance-1114210035, extra_dhcp_opts=[], fixed_ips=[], id=79663a4e-2979-44db-bdea-40e4855cb323, ip_allocation=immediate, mac_address=fa:16:3e:be:70:a6, name=, network_id=b2c4ac07-8851-40d3-9495-d0489b67c4c3, port_security_enabled=True, project_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['c52603b5-5f47-4123-b8fe-cc9f0a56d914'], standard_attr_id=2457, status=DOWN, tags=[], tenant_id=d3c0d1ce8d854a7b9ffc953e88cd2c44, updated_at=2025-11-28T10:07:04Z on network b2c4ac07-8851-40d3-9495-d0489b67c4c3
Nov 28 05:07:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:07:05
Nov 28 05:07:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 28 05:07:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap
Nov 28 05:07:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_data', 'vms', 'images', 'backups', 'manila_metadata', 'volumes', '.mgr']
Nov 28 05:07:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes
Nov 28 05:07:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:07:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4069209835' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:07:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:07:05 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4069209835' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:07:05 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:05.670 261346 INFO neutron.agent.dhcp.agent [None req-3a4efc61-bf3f-4e53-bf51-d66fd999eedf - - - - - -] DHCP configuration for ports {'a9243b0d-5608-4dd1-bf07-987690272773'} is completed
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:07:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:07:05 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 2 addresses
Nov 28 05:07:05 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host
Nov 28 05:07:05 localhost podman[318391]: 2025-11-28 10:07:05.79067727 +0000 UTC m=+0.061367585 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:07:05 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts
Nov 28 05:07:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 1.0 GiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 41 KiB/s rd, 22 MiB/s wr, 67 op/s
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004811110674902289 of space, bias 1.0, pg target 0.9622221349804577 quantized to 32 (current 32)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.271566164154104e-06 of space, bias 1.0, pg target 0.0006521321887213847 quantized to 32 (current 32)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.06404308870742602 of space, bias 1.0, pg target 12.765922349013586 quantized to 32 (current 32)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.089102922017495e-05 quantized to 32 (current 32)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001017820584403499 quantized to 32 (current 32)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:07:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.543132328308208e-06 of space, bias 4.0, pg target 0.004885538805136795 quantized to 16 (current 16)
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:07:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:07:06 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:06.613 261346 INFO neutron.agent.dhcp.agent [None req-e8bfdb4d-0326-4c55-ab5f-ac0c01276607 - - - - - -] DHCP configuration for ports {'79663a4e-2979-44db-bdea-40e4855cb323'} is completed
Nov 28 05:07:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:07:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:07:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:07:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:07:06 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:07:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f30195cf-a779-4b65-9774-df0ab49a62cf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:07:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < ""
Nov 28 05:07:06 localhost systemd[1]: tmp-crun.ZCIpWg.mount: Deactivated successfully.
Nov 28 05:07:07 localhost podman[318414]: 2025-11-28 10:07:07.004237242 +0000 UTC m=+0.095093646 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:07:07 localhost podman[318414]: 2025-11-28 10:07:07.034410003 +0000 UTC m=+0.125266397 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:07:07 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 05:07:07 localhost podman[318412]: 2025-11-28 10:07:07.054132793 +0000 UTC m=+0.152794698 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:07:07 localhost podman[318413]: 2025-11-28 10:07:07.100547585 +0000 UTC m=+0.197097975 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 05:07:07 localhost podman[318412]: 2025-11-28 10:07:07.170946508 +0000 UTC m=+0.269608483 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:07:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f30195cf-a779-4b65-9774-df0ab49a62cf/.meta.tmp'
Nov 28 05:07:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f30195cf-a779-4b65-9774-df0ab49a62cf/.meta.tmp' to config b'/volumes/_nogroup/f30195cf-a779-4b65-9774-df0ab49a62cf/.meta'
Nov 28 05:07:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < ""
Nov 28 05:07:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f30195cf-a779-4b65-9774-df0ab49a62cf", "format": "json"}]: dispatch
Nov 28 05:07:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < ""
Nov 28 05:07:07 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 05:07:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < "" Nov 28 05:07:07 localhost podman[318413]: 2025-11-28 10:07:07.186995214 +0000 UTC m=+0.283545554 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0) Nov 28 05:07:07 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:07:07 localhost podman[318415]: 2025-11-28 10:07:07.261267756 +0000 UTC m=+0.349296603 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:07:07 localhost podman[318415]: 2025-11-28 10:07:07.296240536 +0000 UTC m=+0.384269373 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:07:07 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:07:07 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:07.526 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:06:55Z, description=, device_id=113746df-806c-4ec0-9f15-ab9153798c56, device_owner=network:router_interface, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=c60962a6-b068-4e59-9a3d-1385700e4916, ip_allocation=immediate, mac_address=fa:16:3e:77:98:c0, name=tempest-PortsTestJSON-183814121, network_id=5d55e4ac-ff66-4512-9e4c-487c005fe37c, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b'], standard_attr_id=2439, status=ACTIVE, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:03Z on network 5d55e4ac-ff66-4512-9e4c-487c005fe37c#033[00m Nov 28 05:07:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 67 KiB/s rd, 32 MiB/s wr, 106 op/s Nov 28 05:07:07 localhost nova_compute[280168]: 2025-11-28 10:07:07.926 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:08 localhost dnsmasq[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/addn_hosts - 1 addresses Nov 28 05:07:08 localhost podman[318510]: 2025-11-28 
10:07:08.013319342 +0000 UTC m=+0.061012424 container kill 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3) Nov 28 05:07:08 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/host Nov 28 05:07:08 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/opts Nov 28 05:07:08 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:08.545 261346 INFO neutron.agent.dhcp.agent [None req-d7f2dc29-1cb1-460b-a486-d3f02a5c966a - - - - - -] DHCP configuration for ports {'c60962a6-b068-4e59-9a3d-1385700e4916'} is completed#033[00m Nov 28 05:07:09 localhost nova_compute[280168]: 2025-11-28 10:07:09.223 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:09 localhost podman[318550]: 2025-11-28 10:07:09.246087137 +0000 UTC m=+0.063829021 container kill 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 28 05:07:09 
localhost dnsmasq[318300]: exiting on receipt of SIGTERM Nov 28 05:07:09 localhost systemd[1]: tmp-crun.dPMdfv.mount: Deactivated successfully. Nov 28 05:07:09 localhost systemd[1]: libpod-0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184.scope: Deactivated successfully. Nov 28 05:07:09 localhost podman[318563]: 2025-11-28 10:07:09.302517849 +0000 UTC m=+0.042280936 container died 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125) Nov 28 05:07:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:07:09 localhost systemd[1]: var-lib-containers-storage-overlay-b30c06b143ece65045ee680ac009c0929cadf7b730be6fdcddc88c7afdc918bb-merged.mount: Deactivated successfully. Nov 28 05:07:09 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184-userdata-shm.mount: Deactivated successfully. 
Nov 28 05:07:09 localhost podman[318563]: 2025-11-28 10:07:09.345292079 +0000 UTC m=+0.085055156 container remove 0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 05:07:09 localhost systemd[1]: libpod-conmon-0e83d0c2461a254042e294a6421feea4104273067078a5ad2e14050a33c19184.scope: Deactivated successfully. Nov 28 05:07:09 localhost podman[318591]: 2025-11-28 10:07:09.414236948 +0000 UTC m=+0.066735901 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 
'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:07:09 localhost podman[318591]: 2025-11-28 10:07:09.421556014 +0000 UTC m=+0.074054957 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:07:09 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:07:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e188 e188: 6 total, 6 up, 6 in Nov 28 05:07:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 1.1 GiB data, 3.7 GiB used, 38 GiB / 42 GiB avail; 72 KiB/s rd, 26 MiB/s wr, 109 op/s Nov 28 05:07:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "f30195cf-a779-4b65-9774-df0ab49a62cf", "new_size": 2147483648, "format": "json"}]: dispatch Nov 28 05:07:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < "" Nov 28 05:07:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < "" Nov 28 05:07:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:10.641 2 INFO neutron.agent.securitygroups_rpc [None req-570a0175-8080-4b40-9e80-d9942b63779e e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['c81397c2-33ee-481d-8257-b39c2b0c331e', '19d31bf3-ea7b-49ec-820d-ba3fe5752e88']#033[00m Nov 28 05:07:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:10.645 2 INFO neutron.agent.securitygroups_rpc [None req-22769ed5-ed71-4ef8-ab49-99801270d0d3 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m Nov 28 05:07:11 localhost dnsmasq[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/addn_hosts - 0 addresses Nov 28 05:07:11 localhost dnsmasq-dhcp[318139]: read 
/var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/host Nov 28 05:07:11 localhost dnsmasq-dhcp[318139]: read /var/lib/neutron/dhcp/5d55e4ac-ff66-4512-9e4c-487c005fe37c/opts Nov 28 05:07:11 localhost podman[318657]: 2025-11-28 10:07:11.572365189 +0000 UTC m=+0.059435496 container kill 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e189 e189: 6 total, 6 up, 6 in Nov 28 05:07:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 12 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 157 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 65 KiB/s rd, 22 MiB/s wr, 96 op/s Nov 28 05:07:11 localhost podman[318704]: Nov 28 05:07:11 localhost podman[318704]: 2025-11-28 10:07:11.993032475 +0000 UTC m=+0.091135385 container create 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 05:07:12 localhost systemd[1]: Started libpod-conmon-9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b.scope. Nov 28 05:07:12 localhost podman[318704]: 2025-11-28 10:07:11.944204537 +0000 UTC m=+0.042307477 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:12 localhost systemd[1]: Started libcrun container. Nov 28 05:07:12 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd62fdf166d581c6d2b3ef9c0fe541584f5effaa7812f4badcc3cc24df395cfc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:12 localhost podman[318704]: 2025-11-28 10:07:12.058892408 +0000 UTC m=+0.156995318 container init 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:12 localhost podman[318704]: 2025-11-28 10:07:12.073618182 +0000 UTC m=+0.171721092 container start 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:12 localhost dnsmasq[318722]: started, version 2.85 cachesize 150 Nov 28 05:07:12 localhost dnsmasq[318722]: DNS service limited to local subnets Nov 28 05:07:12 localhost dnsmasq[318722]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:12 localhost dnsmasq[318722]: warning: no upstream servers configured Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:07:12 localhost dnsmasq[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:12 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:12.142 261346 INFO neutron.agent.dhcp.agent [None req-588228b2-f577-44aa-80f1-e24f139f4962 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:04Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=a9243b0d-5608-4dd1-bf07-987690272773, ip_allocation=immediate, mac_address=fa:16:3e:3a:d2:78, name=tempest-PortsIpV6TestJSON-635648094, network_id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, 
security_groups=['c81397c2-33ee-481d-8257-b39c2b0c331e'], standard_attr_id=2462, status=DOWN, tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:07:10Z on network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb#033[00m Nov 28 05:07:12 localhost dnsmasq[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:12 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:12 localhost podman[318740]: 2025-11-28 10:07:12.346728394 +0000 UTC m=+0.075879564 container kill 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:12 localhost systemd[1]: tmp-crun.lENMz9.mount: Deactivated successfully. 
Nov 28 05:07:12 localhost nova_compute[280168]: 2025-11-28 10:07:12.629 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:12 localhost ovn_controller[152726]: 2025-11-28T10:07:12Z|00175|binding|INFO|Releasing lport e1272867-532b-4f64-b1d3-8e10c12195a2 from this chassis (sb_readonly=0) Nov 28 05:07:12 localhost kernel: device tape1272867-53 left promiscuous mode Nov 28 05:07:12 localhost ovn_controller[152726]: 2025-11-28T10:07:12Z|00176|binding|INFO|Setting lport e1272867-532b-4f64-b1d3-8e10c12195a2 down in Southbound Nov 28 05:07:12 localhost nova_compute[280168]: 2025-11-28 10:07:12.653 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:12 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:12.738 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-5d55e4ac-ff66-4512-9e4c-487c005fe37c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d55e4ac-ff66-4512-9e4c-487c005fe37c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], 
additional_encap=[], encap=[], mirror_rules=[], datapath=211a2e3d-b3b5-43ed-95be-d9132daa612f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=e1272867-532b-4f64-b1d3-8e10c12195a2) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:12 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:12.741 158530 INFO neutron.agent.ovn.metadata.agent [-] Port e1272867-532b-4f64-b1d3-8e10c12195a2 in datapath 5d55e4ac-ff66-4512-9e4c-487c005fe37c unbound from our chassis#033[00m Nov 28 05:07:12 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:12.745 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d55e4ac-ff66-4512-9e4c-487c005fe37c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:12 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:12.746 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[ceef5364-0cfe-4a2a-9c34-05112c33e1dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:12 localhost nova_compute[280168]: 2025-11-28 10:07:12.971 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:13 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:13.158 261346 INFO neutron.agent.dhcp.agent [None req-a84629f1-b95d-4f9e-9f6d-da6d6f22d09c - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', 'a9243b0d-5608-4dd1-bf07-987690272773', '8646aa95-6463-44cd-8c34-1bec1705e23b', '8bc6a73d-610f-4f06-b515-26f3efcf46a4'} is completed#033[00m Nov 28 05:07:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:07:13 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:13.586 261346 INFO neutron.agent.dhcp.agent [None req-b051f84e-683e-4599-af18-b02dd6300d11 - - - - - -] DHCP configuration for ports {'a9243b0d-5608-4dd1-bf07-987690272773'} is completed#033[00m Nov 28 05:07:13 localhost podman[318764]: 2025-11-28 10:07:13.636220429 +0000 UTC m=+0.082933090 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Nov 28 05:07:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f30195cf-a779-4b65-9774-df0ab49a62cf", "format": "json"}]: dispatch Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f30195cf-a779-4b65-9774-df0ab49a62cf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f30195cf-a779-4b65-9774-df0ab49a62cf, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:13 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30195cf-a779-4b65-9774-df0ab49a62cf' of type subvolume Nov 28 05:07:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:13.656+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f30195cf-a779-4b65-9774-df0ab49a62cf' of type subvolume Nov 28 05:07:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f30195cf-a779-4b65-9774-df0ab49a62cf", "force": true, "format": "json"}]: dispatch Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < "" Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f30195cf-a779-4b65-9774-df0ab49a62cf'' moved to 
trashcan Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:07:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f30195cf-a779-4b65-9774-df0ab49a62cf, vol_name:cephfs) < "" Nov 28 05:07:13 localhost podman[318764]: 2025-11-28 10:07:13.679552238 +0000 UTC m=+0.126264909 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true) Nov 28 05:07:13 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:07:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 12 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 157 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 65 KiB/s rd, 22 MiB/s wr, 96 op/s Nov 28 05:07:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:07:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2355630358' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:07:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:07:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2355630358' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:07:14 localhost nova_compute[280168]: 2025-11-28 10:07:14.260 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:14 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:14.565 2 INFO neutron.agent.securitygroups_rpc [None req-94433583-aa1a-4467-b851-fbd6872bea34 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['c81397c2-33ee-481d-8257-b39c2b0c331e']#033[00m Nov 28 05:07:14 localhost dnsmasq[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses Nov 28 05:07:14 localhost podman[318800]: 2025-11-28 10:07:14.928865724 +0000 UTC m=+0.054183645 container kill 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:07:14 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:14 localhost dnsmasq-dhcp[318722]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 12 active+clean+snaptrim_wait, 8 active+clean+snaptrim, 157 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 26 KiB/s rd, 8.0 MiB/s wr, 38 op/s Nov 28 05:07:16 localhost 
ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0. Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.445449) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436445496, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 945, "num_deletes": 255, "total_data_size": 1826301, "memory_usage": 1847528, "flush_reason": "Manual Compaction"} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436455041, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1203374, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23026, "largest_seqno": 23966, "table_properties": {"data_size": 1199158, "index_size": 1879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10649, "raw_average_key_size": 21, "raw_value_size": 1190251, "raw_average_value_size": 2352, "num_data_blocks": 82, "num_entries": 506, "num_filter_entries": 506, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", 
"property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324396, "oldest_key_time": 1764324396, "file_creation_time": 1764324436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 9675 microseconds, and 3868 cpu microseconds. Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.455122) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1203374 bytes OK Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.455144) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.457172) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.457192) EVENT_LOG_v1 {"time_micros": 1764324436457186, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.457210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:07:16 localhost 
ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1821363, prev total WAL file size 1821363, number of live WAL files 2. Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.457880) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end) Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1175KB)], [33(17MB)] Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436457926, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 19057342, "oldest_snapshot_seqno": -1} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12756 keys, 17845285 bytes, temperature: kUnknown Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436603568, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 17845285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17771154, "index_size": 41134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, 
"index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 342052, "raw_average_key_size": 26, "raw_value_size": 17552659, "raw_average_value_size": 1376, "num_data_blocks": 1556, "num_entries": 12756, "num_filter_entries": 12756, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324436, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.604183) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 17845285 bytes Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.605751) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 122.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 17.0 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(30.7) write-amplify(14.8) OK, records in: 13285, records dropped: 529 output_compression: NoCompression Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.605771) EVENT_LOG_v1 {"time_micros": 1764324436605762, "job": 18, "event": "compaction_finished", "compaction_time_micros": 145992, "compaction_time_cpu_micros": 49683, "output_level": 6, "num_output_files": 1, "total_output_size": 17845285, "num_input_records": 13285, "num_output_records": 12756, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436605985, "job": 18, "event": "table_file_deletion", "file_number": 35} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324436608026, 
"job": 18, "event": "table_file_deletion", "file_number": 33} Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.457800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.608138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.608148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.608152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.608156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:07:16.608160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:07:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:16 localhost dnsmasq[318139]: exiting on receipt of SIGTERM Nov 28 05:07:16 localhost podman[318838]: 2025-11-28 10:07:16.842517757 +0000 UTC m=+0.096922703 container kill 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:07:16 localhost systemd[1]: libpod-6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792.scope: Deactivated successfully. Nov 28 05:07:16 localhost podman[318851]: 2025-11-28 10:07:16.906845043 +0000 UTC m=+0.051190922 container died 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:16 localhost podman[318851]: 2025-11-28 10:07:16.934559119 +0000 UTC m=+0.078904948 container cleanup 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 05:07:16 localhost systemd[1]: libpod-conmon-6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792.scope: Deactivated successfully. 
Nov 28 05:07:16 localhost podman[318853]: 2025-11-28 10:07:16.99227727 +0000 UTC m=+0.130769128 container remove 6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5d55e4ac-ff66-4512-9e4c-487c005fe37c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:07:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e190 e190: 6 total, 6 up, 6 in Nov 28 05:07:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "74c605fe-3105-407e-80b8-d4fb6f7d4329", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < "" Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/74c605fe-3105-407e-80b8-d4fb6f7d4329/.meta.tmp' Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/74c605fe-3105-407e-80b8-d4fb6f7d4329/.meta.tmp' to config b'/volumes/_nogroup/74c605fe-3105-407e-80b8-d4fb6f7d4329/.meta' Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:1073741824, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < "" Nov 28 05:07:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "74c605fe-3105-407e-80b8-d4fb6f7d4329", "format": "json"}]: dispatch Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < "" Nov 28 05:07:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < "" Nov 28 05:07:17 localhost systemd[1]: var-lib-containers-storage-overlay-8ce0eb26f941760bc165071691d15f2ff2f716526f20d4e0a559e47f82947841-merged.mount: Deactivated successfully. Nov 28 05:07:17 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6d17717cf4c78ea98c557933e4bb52f68e6a295d91765178c9115356c8664792-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 192 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 8.0 MiB/s wr, 202 op/s Nov 28 05:07:17 localhost nova_compute[280168]: 2025-11-28 10:07:17.974 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:18 localhost systemd[1]: run-netns-qdhcp\x2d5d55e4ac\x2dff66\x2d4512\x2d9e4c\x2d487c005fe37c.mount: Deactivated successfully. 
Nov 28 05:07:18 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:18.010 261346 INFO neutron.agent.dhcp.agent [None req-effcfdf6-50c3-4c69-9253-c6a818f91240 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:18 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:18.011 261346 INFO neutron.agent.dhcp.agent [None req-effcfdf6-50c3-4c69-9253-c6a818f91240 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e191 e191: 6 total, 6 up, 6 in Nov 28 05:07:18 localhost dnsmasq[318722]: exiting on receipt of SIGTERM Nov 28 05:07:18 localhost podman[318896]: 2025-11-28 10:07:18.986296755 +0000 UTC m=+0.067549196 container kill 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:07:18 localhost systemd[1]: libpod-9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b.scope: Deactivated successfully. 
Nov 28 05:07:19 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:18.999 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:19 localhost podman[318909]: 2025-11-28 10:07:19.057506803 +0000 UTC m=+0.056691691 container died 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:19 localhost systemd[1]: tmp-crun.CGxK7m.mount: Deactivated successfully. Nov 28 05:07:19 localhost podman[318909]: 2025-11-28 10:07:19.090338457 +0000 UTC m=+0.089523285 container cleanup 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:19 localhost systemd[1]: libpod-conmon-9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b.scope: Deactivated successfully. 
Nov 28 05:07:19 localhost podman[318911]: 2025-11-28 10:07:19.141521596 +0000 UTC m=+0.133372847 container remove 9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:19 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:19.301 2 INFO neutron.agent.securitygroups_rpc [None req-5151d855-124b-4682-b39f-fc47e0550bce 6b80a7d6f65f4ebe8363ccaae26f3e87 ae10569a38284f298c961498da620c5f - - default default] Security group member updated ['c5eee24b-0bed-4035-a2ab-e6c531c94e43']#033[00m Nov 28 05:07:19 localhost nova_compute[280168]: 2025-11-28 10:07:19.305 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:19 localhost nova_compute[280168]: 2025-11-28 10:07:19.496 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e192 e192: 6 total, 6 up, 6 in Nov 28 05:07:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v380: 177 pgs: 177 active+clean; 192 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 3.9 MiB/s rd, 36 KiB/s wr, 218 op/s Nov 28 05:07:19 localhost systemd[1]: var-lib-containers-storage-overlay-fd62fdf166d581c6d2b3ef9c0fe541584f5effaa7812f4badcc3cc24df395cfc-merged.mount: Deactivated successfully. 
Nov 28 05:07:19 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9232a324cbb803ff5e7248639f834f9284fa50bf671d08fdfddeac4a08ecb36b-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:20 localhost podman[318987]: Nov 28 05:07:20 localhost podman[318987]: 2025-11-28 10:07:20.076362195 +0000 UTC m=+0.089661719 container create 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:07:20 localhost systemd[1]: Started libpod-conmon-749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d.scope. Nov 28 05:07:20 localhost podman[318987]: 2025-11-28 10:07:20.034506223 +0000 UTC m=+0.047805767 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:20 localhost systemd[1]: Started libcrun container. 
Nov 28 05:07:20 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33587a6373740309204ac909f2c16bba4ef78bf310718d28d2ec5f8f7d5dcf20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:20 localhost podman[318987]: 2025-11-28 10:07:20.156816239 +0000 UTC m=+0.170115763 container init 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:20 localhost podman[318987]: 2025-11-28 10:07:20.166970422 +0000 UTC m=+0.180269946 container start 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:07:20 localhost dnsmasq[319005]: started, version 2.85 cachesize 150 Nov 28 05:07:20 localhost dnsmasq[319005]: DNS service limited to local subnets Nov 28 05:07:20 localhost dnsmasq[319005]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:20 localhost dnsmasq[319005]: warning: no upstream servers 
configured Nov 28 05:07:20 localhost dnsmasq-dhcp[319005]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Nov 28 05:07:20 localhost dnsmasq[319005]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses Nov 28 05:07:20 localhost dnsmasq-dhcp[319005]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:20 localhost dnsmasq-dhcp[319005]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e193 e193: 6 total, 6 up, 6 in Nov 28 05:07:20 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:20.477 261346 INFO neutron.agent.dhcp.agent [None req-496b6db1-97f8-4079-abb7-78122e8313d3 - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', '8646aa95-6463-44cd-8c34-1bec1705e23b', '8bc6a73d-610f-4f06-b515-26f3efcf46a4'} is completed#033[00m Nov 28 05:07:20 localhost dnsmasq[319005]: exiting on receipt of SIGTERM Nov 28 05:07:20 localhost podman[319023]: 2025-11-28 10:07:20.567913569 +0000 UTC m=+0.062643575 container kill 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:20 localhost systemd[1]: libpod-749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d.scope: Deactivated successfully. 
Nov 28 05:07:20 localhost podman[319037]: 2025-11-28 10:07:20.640558091 +0000 UTC m=+0.057727842 container died 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:20 localhost podman[319037]: 2025-11-28 10:07:20.674034945 +0000 UTC m=+0.091204646 container cleanup 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:07:20 localhost systemd[1]: libpod-conmon-749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d.scope: Deactivated successfully. 
Nov 28 05:07:20 localhost podman[319039]: 2025-11-28 10:07:20.729153996 +0000 UTC m=+0.136598087 container remove 749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 28 05:07:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "74c605fe-3105-407e-80b8-d4fb6f7d4329", "format": "json"}]: dispatch
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:07:20 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74c605fe-3105-407e-80b8-d4fb6f7d4329' of type subvolume
Nov 28 05:07:20 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:20.820+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '74c605fe-3105-407e-80b8-d4fb6f7d4329' of type subvolume
Nov 28 05:07:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "74c605fe-3105-407e-80b8-d4fb6f7d4329", "force": true, "format": "json"}]: dispatch
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < ""
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/74c605fe-3105-407e-80b8-d4fb6f7d4329'' moved to trashcan
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:07:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:74c605fe-3105-407e-80b8-d4fb6f7d4329, vol_name:cephfs) < ""
Nov 28 05:07:20 localhost systemd[1]: var-lib-containers-storage-overlay-33587a6373740309204ac909f2c16bba4ef78bf310718d28d2ec5f8f7d5dcf20-merged.mount: Deactivated successfully.
Nov 28 05:07:20 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-749149ed15da240a27644b8b3bb546a3353809c26e676ff0d730aad5bc686b6d-userdata-shm.mount: Deactivated successfully.
Nov 28 05:07:21 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:07:21 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:07:21 localhost podman[319082]: 2025-11-28 10:07:21.001692929 +0000 UTC m=+0.060040854 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 28 05:07:21 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:07:21 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:21.384 2 INFO neutron.agent.securitygroups_rpc [None req-fe8ed995-ee4b-4312-80c7-fb60647feb81 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['235f4ca9-4e7e-483e-ba22-a609f7751fe8']#033[00m
Nov 28 05:07:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e194 e194: 6 total, 6 up, 6 in
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.717 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:21 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:21.775 261346 INFO neutron.agent.linux.ip_lib [None req-4c1bfc47-5e8f-408e-865b-81dbc7756058 - - - - - -] Device tapd42f5d61-af cannot be used as it has no MAC address#033[00m
Nov 28 05:07:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:07:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 192 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 18 KiB/s wr, 108 op/s
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.826 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:21 localhost kernel: device tapd42f5d61-af entered promiscuous mode
Nov 28 05:07:21 localhost ovn_controller[152726]: 2025-11-28T10:07:21Z|00177|binding|INFO|Claiming lport d42f5d61-afe1-455f-b448-86993094b244 for this chassis.
Nov 28 05:07:21 localhost ovn_controller[152726]: 2025-11-28T10:07:21Z|00178|binding|INFO|d42f5d61-afe1-455f-b448-86993094b244: Claiming unknown
Nov 28 05:07:21 localhost NetworkManager[5965]: [1764324441.8361] manager: (tapd42f5d61-af): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.841 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:21.848 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9fa2ef-906d-4c5f-8a61-e350b84d90cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=d42f5d61-afe1-455f-b448-86993094b244) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 05:07:21 localhost systemd-udevd[319116]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 05:07:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:21.851 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d42f5d61-afe1-455f-b448-86993094b244 in datapath 744b5a82-3c5c-4b41-ba44-527244a209c4 bound to our chassis#033[00m
Nov 28 05:07:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:21.853 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 744b5a82-3c5c-4b41-ba44-527244a209c4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Nov 28 05:07:21 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:21.855 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[9d04ecd8-d706-49d0-8058-0c6c0b61ac67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.892 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:21 localhost ovn_controller[152726]: 2025-11-28T10:07:21Z|00179|binding|INFO|Setting lport d42f5d61-afe1-455f-b448-86993094b244 ovn-installed in OVS
Nov 28 05:07:21 localhost ovn_controller[152726]: 2025-11-28T10:07:21Z|00180|binding|INFO|Setting lport d42f5d61-afe1-455f-b448-86993094b244 up in Southbound
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.896 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:21 localhost nova_compute[280168]: 2025-11-28 10:07:21.953 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:22 localhost nova_compute[280168]: 2025-11-28 10:07:22.004 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e195 e195: 6 total, 6 up, 6 in
Nov 28 05:07:22 localhost podman[319197]:
Nov 28 05:07:22 localhost podman[319197]: 2025-11-28 10:07:22.839289106 +0000 UTC m=+0.093007212 container create 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:07:22 localhost systemd[1]: Started libpod-conmon-7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8.scope.
Nov 28 05:07:22 localhost systemd[1]: Started libcrun container.
Nov 28 05:07:22 localhost podman[319197]: 2025-11-28 10:07:22.791985925 +0000 UTC m=+0.045704021 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 28 05:07:22 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47429582b4db31382ce2dca59c66017594f0da5442226bc44ebbcd242cb78e55/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 05:07:22 localhost podman[319197]: 2025-11-28 10:07:22.902065564 +0000 UTC m=+0.155783640 container init 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:07:22 localhost podman[319197]: 2025-11-28 10:07:22.91004166 +0000 UTC m=+0.163759736 container start 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 28 05:07:22 localhost dnsmasq[319232]: started, version 2.85 cachesize 150
Nov 28 05:07:22 localhost dnsmasq[319232]: DNS service limited to local subnets
Nov 28 05:07:22 localhost dnsmasq[319232]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 28 05:07:22 localhost dnsmasq[319232]: warning: no upstream servers configured
Nov 28 05:07:22 localhost dnsmasq-dhcp[319232]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Nov 28 05:07:22 localhost dnsmasq-dhcp[319232]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Nov 28 05:07:22 localhost dnsmasq[319232]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses
Nov 28 05:07:22 localhost dnsmasq-dhcp[319232]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host
Nov 28 05:07:22 localhost dnsmasq-dhcp[319232]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts
Nov 28 05:07:23 localhost nova_compute[280168]: 2025-11-28 10:07:23.022 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:23 localhost dnsmasq[319232]: exiting on receipt of SIGTERM
Nov 28 05:07:23 localhost systemd[1]: libpod-7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8.scope: Deactivated successfully.
Nov 28 05:07:23 localhost podman[319239]: 2025-11-28 10:07:23.06260614 +0000 UTC m=+0.126185297 container died 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:07:23 localhost podman[319239]: 2025-11-28 10:07:23.093579016 +0000 UTC m=+0.157158173 container cleanup 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:07:23 localhost podman[319252]: 2025-11-28 10:07:23.130542897 +0000 UTC m=+0.063507931 container cleanup 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 28 05:07:23 localhost systemd[1]: libpod-conmon-7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8.scope: Deactivated successfully.
Nov 28 05:07:23 localhost podman[319264]: 2025-11-28 10:07:23.191519709 +0000 UTC m=+0.081823476 container remove 7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 28 05:07:23 localhost podman[319283]:
Nov 28 05:07:23 localhost podman[319283]: 2025-11-28 10:07:23.289763202 +0000 UTC m=+0.080659401 container create e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:07:23 localhost systemd[1]: Started libpod-conmon-e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df.scope.
Nov 28 05:07:23 localhost systemd[1]: Started libcrun container.
Nov 28 05:07:23 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3cb155a93a6ece0fb95602d69cd611411b4d92a56342074acdd1cb9fb6b26/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 05:07:23 localhost podman[319283]: 2025-11-28 10:07:23.346183013 +0000 UTC m=+0.137079222 container init e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Nov 28 05:07:23 localhost podman[319283]: 2025-11-28 10:07:23.249143688 +0000 UTC m=+0.040039937 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 28 05:07:23 localhost podman[319283]: 2025-11-28 10:07:23.354254803 +0000 UTC m=+0.145151012 container start e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 28 05:07:23 localhost dnsmasq[319301]: started, version 2.85 cachesize 150
Nov 28 05:07:23 localhost dnsmasq[319301]: DNS service limited to local subnets
Nov 28 05:07:23 localhost dnsmasq[319301]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 28 05:07:23 localhost dnsmasq[319301]: warning: no upstream servers configured
Nov 28 05:07:23 localhost dnsmasq-dhcp[319301]: DHCP, static leases only on 10.100.0.0, lease time 1d
Nov 28 05:07:23 localhost dnsmasq[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses
Nov 28 05:07:23 localhost dnsmasq-dhcp[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host
Nov 28 05:07:23 localhost dnsmasq-dhcp[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts
Nov 28 05:07:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 192 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 17 KiB/s wr, 100 op/s
Nov 28 05:07:23 localhost systemd[1]: var-lib-containers-storage-overlay-47429582b4db31382ce2dca59c66017594f0da5442226bc44ebbcd242cb78e55-merged.mount: Deactivated successfully.
Nov 28 05:07:23 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d00eb47706247a4399d8c78950a1e49d730ce95b6c195feb49b94727c9561a8-userdata-shm.mount: Deactivated successfully.
Nov 28 05:07:24 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:24.009 261346 INFO neutron.agent.dhcp.agent [None req-641e10e9-3817-4d29-b950-2fe355e91ffa - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', '8646aa95-6463-44cd-8c34-1bec1705e23b', '8bc6a73d-610f-4f06-b515-26f3efcf46a4'} is completed#033[00m
Nov 28 05:07:24 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:24.092 2 INFO neutron.agent.securitygroups_rpc [None req-b1e53ab7-c922-4511-9eea-36891b374394 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['ef6c27ab-7008-4940-88ab-f495a3348997', '235f4ca9-4e7e-483e-ba22-a609f7751fe8', 'ad28b9ca-0164-4a23-9923-7d61ac565e84']#033[00m
Nov 28 05:07:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "84a40f08-8b6b-4686-8d05-33a3e9292f4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < ""
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/84a40f08-8b6b-4686-8d05-33a3e9292f4f/.meta.tmp'
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/84a40f08-8b6b-4686-8d05-33a3e9292f4f/.meta.tmp' to config b'/volumes/_nogroup/84a40f08-8b6b-4686-8d05-33a3e9292f4f/.meta'
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < ""
Nov 28 05:07:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "84a40f08-8b6b-4686-8d05-33a3e9292f4f", "format": "json"}]: dispatch
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < ""
Nov 28 05:07:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < ""
Nov 28 05:07:24 localhost nova_compute[280168]: 2025-11-28 10:07:24.345 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:24 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:24.422 261346 INFO neutron.agent.dhcp.agent [None req-58ebe808-7899-4368-bd27-1fa7f16b1e10 - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m
Nov 28 05:07:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e196 e196: 6 total, 6 up, 6 in
Nov 28 05:07:24 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:24.754 2 INFO neutron.agent.securitygroups_rpc [None req-ded8a162-fe49-472e-9107-11e07cb8573a e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['ef6c27ab-7008-4940-88ab-f495a3348997', 'ad28b9ca-0164-4a23-9923-7d61ac565e84']#033[00m
Nov 28 05:07:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 05:07:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 05:07:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 28 05:07:24 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:07:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 05:07:24 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev e28ff6ca-b8a9-45c3-a327-baf42b0e54aa (Updating node-proxy deployment (+3 -> 3))
Nov 28 05:07:24 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev e28ff6ca-b8a9-45c3-a327-baf42b0e54aa (Updating node-proxy deployment (+3 -> 3))
Nov 28 05:07:24 localhost ceph-mgr[286188]: [progress INFO root] Completed event e28ff6ca-b8a9-45c3-a327-baf42b0e54aa (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Nov 28 05:07:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 28 05:07:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Nov 28 05:07:25 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 28 05:07:25 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:25.422 2 INFO neutron.agent.securitygroups_rpc [None req-84883400-a0bd-45dd-a8ae-3bc8b417b162 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['acf02bd6-8fdb-4bdf-b655-c11d3c48057a']#033[00m
Nov 28 05:07:25 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:25.470 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:25Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f096e93-c3cf-440a-8cda-fd3f17a679fb, ip_allocation=immediate, mac_address=fa:16:3e:bd:2b:32, name=tempest-PortsTestJSON-1518413586, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:56Z, description=, dns_domain=, id=744b5a82-3c5c-4b41-ba44-527244a209c4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-935184943, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32831, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2138, status=ACTIVE, subnets=['4bf409b4-2136-4411-a9e7-978df7f2f500'], tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:20Z, vlan_transparent=None, network_id=744b5a82-3c5c-4b41-ba44-527244a209c4, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['acf02bd6-8fdb-4bdf-b655-c11d3c48057a'], standard_attr_id=2522, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:25Z on network 744b5a82-3c5c-4b41-ba44-527244a209c4#033[00m
Nov 28 05:07:25 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:25.473 2 INFO neutron.agent.securitygroups_rpc [None req-0fb934e7-9ad4-4c2e-8ef8-4b9c21b34e7a 6b80a7d6f65f4ebe8363ccaae26f3e87 ae10569a38284f298c961498da620c5f - - default default] Security group member updated ['c5eee24b-0bed-4035-a2ab-e6c531c94e43']#033[00m
Nov 28 05:07:25 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:07:25 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:07:25 localhost dnsmasq[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses
Nov 28 05:07:25 localhost dnsmasq-dhcp[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host
Nov 28 05:07:25 localhost podman[319423]: 2025-11-28 10:07:25.726802683 +0000 UTC m=+0.059630212 container kill e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:07:25 localhost dnsmasq-dhcp[319301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts
Nov 28 05:07:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v387: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 192 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 80 op/s
Nov 28 05:07:25 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events
Nov 28 05:07:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 05:07:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.102 261346 INFO neutron.agent.dhcp.agent [None req-7342aeab-f0ec-4c04-ae76-2a91133c8b33 - - - - - -] DHCP configuration for ports {'3f096e93-c3cf-440a-8cda-fd3f17a679fb'} is completed#033[00m
Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.158 261346 INFO neutron.agent.linux.ip_lib [None req-59202482-0395-4fc3-b670-eb0f5cd989b9 - - - - - -] Device tap0f9c27e4-dc cannot be used as it has no MAC address#033[00m
Nov 28 05:07:26 localhost systemd[1]: tmp-crun.ssnBFK.mount: Deactivated successfully.
Nov 28 05:07:26 localhost podman[319460]: 2025-11-28 10:07:26.167128685 +0000 UTC m=+0.065309777 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.6)
Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.176 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:26 localhost kernel: device tap0f9c27e4-dc entered promiscuous mode
Nov 28 05:07:26 localhost NetworkManager[5965]: [1764324446.1833] manager: (tap0f9c27e4-dc): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Nov 28 05:07:26 localhost ovn_controller[152726]: 2025-11-28T10:07:26Z|00181|binding|INFO|Claiming lport 0f9c27e4-dc5b-458a-84e7-59a6845be341 for this chassis.
Nov 28 05:07:26 localhost ovn_controller[152726]: 2025-11-28T10:07:26Z|00182|binding|INFO|0f9c27e4-dc5b-458a-84e7-59a6845be341: Claiming unknown
Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.186 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:26 localhost systemd-udevd[319501]: Network interface NamePolicy= disabled on kernel command line.
Nov 28 05:07:26 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:26.206 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe58:df08/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-bb75b6d0-46f7-4ff4-b977-20963925f011', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb75b6d0-46f7-4ff4-b977-20963925f011', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae10569a38284f298c961498da620c5f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19295739-89e1-4341-a9f7-bf31d43c2d95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f9c27e4-dc5b-458a-84e7-59a6845be341) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:26 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:26.207 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0f9c27e4-dc5b-458a-84e7-59a6845be341 in datapath bb75b6d0-46f7-4ff4-b977-20963925f011 bound to our chassis#033[00m Nov 28 05:07:26 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:26.209 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port 64f2d2fb-0269-4c37-9cd4-8777ae8910b3 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips 
/usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:07:26 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:26.209 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb75b6d0-46f7-4ff4-b977-20963925f011, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:26 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:26.209 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[99bec901-1281-4f86-8478-2bd8bc14ec48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.220 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost ovn_controller[152726]: 2025-11-28T10:07:26Z|00183|binding|INFO|Setting lport 0f9c27e4-dc5b-458a-84e7-59a6845be341 ovn-installed in OVS Nov 28 05:07:26 localhost ovn_controller[152726]: 2025-11-28T10:07:26Z|00184|binding|INFO|Setting lport 0f9c27e4-dc5b-458a-84e7-59a6845be341 up in Southbound Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.222 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No 
such device Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost journal[228057]: ethtool ioctl error on tap0f9c27e4-dc: No such device Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.247 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:26 localhost podman[319460]: 2025-11-28 10:07:26.258842377 +0000 UTC m=+0.157023449 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc.) Nov 28 05:07:26 localhost nova_compute[280168]: 2025-11-28 10:07:26.270 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:26 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:07:26 localhost podman[319532]: Nov 28 05:07:26 localhost podman[319532]: 2025-11-28 10:07:26.314373191 +0000 UTC m=+0.061059606 container create b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:26 localhost systemd[1]: Started libpod-conmon-b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948.scope. Nov 28 05:07:26 localhost systemd[1]: Started libcrun container. Nov 28 05:07:26 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a40302d152a1d4acd8c242f619baa5eea043f1a1ad2914ca314bd0285d34a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:26 localhost podman[319532]: 2025-11-28 10:07:26.380640676 +0000 UTC m=+0.127327081 container init b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:07:26 localhost podman[319532]: 2025-11-28 10:07:26.282989022 +0000 UTC m=+0.029675447 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 
05:07:26 localhost podman[319532]: 2025-11-28 10:07:26.386609321 +0000 UTC m=+0.133295716 container start b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:07:26 localhost dnsmasq[319559]: started, version 2.85 cachesize 150 Nov 28 05:07:26 localhost dnsmasq[319559]: DNS service limited to local subnets Nov 28 05:07:26 localhost dnsmasq[319559]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:26 localhost dnsmasq[319559]: warning: no upstream servers configured Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Nov 28 05:07:26 localhost dnsmasq[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.434 261346 INFO neutron.agent.dhcp.agent [None req-223da476-0ca2-44ac-b2f9-b135e3874901 - - - - - -] Trigger reload_allocations for port 
admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:21Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d039e4d-f111-4c85-a4bc-bc275c485ad6, ip_allocation=immediate, mac_address=fa:16:3e:a9:78:64, name=tempest-PortsIpV6TestJSON-342618759, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:06:07Z, description=, dns_domain=, id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1946008216, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=49828, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2226, status=ACTIVE, subnets=['d9bf5441-d2dd-4350-bfbf-90c18e4a1028', 'eb032588-3453-4d13-aca7-c97dd8bc87f7'], tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:07:19Z, vlan_transparent=None, network_id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['235f4ca9-4e7e-483e-ba22-a609f7751fe8'], standard_attr_id=2508, status=DOWN, tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:07:21Z on network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb#033[00m Nov 28 05:07:26 localhost dnsmasq[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:26 localhost dnsmasq-dhcp[319559]: read 
/var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:26 localhost podman[319587]: 2025-11-28 10:07:26.618895831 +0000 UTC m=+0.064944026 container kill b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.767 261346 INFO neutron.agent.dhcp.agent [None req-a4f455cb-1187-4a22-ace0-40ae01a8e64a - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', '3d039e4d-f111-4c85-a4bc-bc275c485ad6', '8646aa95-6463-44cd-8c34-1bec1705e23b', '8bc6a73d-610f-4f06-b515-26f3efcf46a4'} is completed#033[00m Nov 28 05:07:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.810 261346 INFO neutron.agent.dhcp.agent [None req-223da476-0ca2-44ac-b2f9-b135e3874901 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:21Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3d039e4d-f111-4c85-a4bc-bc275c485ad6, ip_allocation=immediate, mac_address=fa:16:3e:a9:78:64, name=tempest-PortsIpV6TestJSON-1465794594, 
network_id=719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, port_security_enabled=True, project_id=8462a4a9a313405e8fd212f9ec4a0c92, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['ad28b9ca-0164-4a23-9923-7d61ac565e84', 'ef6c27ab-7008-4940-88ab-f495a3348997'], standard_attr_id=2508, status=DOWN, tags=[], tenant_id=8462a4a9a313405e8fd212f9ec4a0c92, updated_at=2025-11-28T10:07:22Z on network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb#033[00m Nov 28 05:07:26 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:26.887 261346 INFO neutron.agent.dhcp.agent [None req-68d4b832-3a93-4d0a-a8a2-32ca37082701 - - - - - -] DHCP configuration for ports {'3d039e4d-f111-4c85-a4bc-bc275c485ad6'} is completed#033[00m Nov 28 05:07:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:07:27 localhost dnsmasq[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 1 addresses Nov 28 05:07:27 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:27 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:27 localhost podman[319645]: 2025-11-28 10:07:27.13584934 +0000 UTC m=+0.132883323 container kill b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:27 localhost podman[319663]: Nov 28 05:07:27 localhost podman[319663]: 2025-11-28 10:07:27.165130413 +0000 UTC 
m=+0.076270245 container create 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:07:27 localhost systemd[1]: Started libpod-conmon-5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c.scope. Nov 28 05:07:27 localhost systemd[1]: Started libcrun container. Nov 28 05:07:27 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1910b9af47a7dc6a481f7f88c43c164511ef0e76206f1154477c85aff6d7d500/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:27 localhost podman[319663]: 2025-11-28 10:07:27.232850224 +0000 UTC m=+0.143990056 container init 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 05:07:27 localhost podman[319663]: 2025-11-28 10:07:27.13392119 +0000 UTC m=+0.045060992 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:27 localhost podman[319663]: 2025-11-28 10:07:27.245749292 +0000 UTC m=+0.156889124 container start 
5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 05:07:27 localhost dnsmasq[319688]: started, version 2.85 cachesize 150 Nov 28 05:07:27 localhost dnsmasq[319688]: DNS service limited to local subnets Nov 28 05:07:27 localhost dnsmasq[319688]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:27 localhost dnsmasq[319688]: warning: no upstream servers configured Nov 28 05:07:27 localhost dnsmasq-dhcp[319688]: DHCPv6, static leases only on 2001:db8::, lease time 1d Nov 28 05:07:27 localhost dnsmasq[319688]: read /var/lib/neutron/dhcp/bb75b6d0-46f7-4ff4-b977-20963925f011/addn_hosts - 0 addresses Nov 28 05:07:27 localhost dnsmasq-dhcp[319688]: read /var/lib/neutron/dhcp/bb75b6d0-46f7-4ff4-b977-20963925f011/host Nov 28 05:07:27 localhost dnsmasq-dhcp[319688]: read /var/lib/neutron/dhcp/bb75b6d0-46f7-4ff4-b977-20963925f011/opts Nov 28 05:07:27 localhost dnsmasq[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses Nov 28 05:07:27 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:27 localhost podman[319710]: 2025-11-28 10:07:27.438187593 +0000 UTC m=+0.040317365 container kill b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, 
name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS) Nov 28 05:07:27 localhost dnsmasq-dhcp[319559]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "84a40f08-8b6b-4686-8d05-33a3e9292f4f", "format": "json"}]: dispatch Nov 28 05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:27 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84a40f08-8b6b-4686-8d05-33a3e9292f4f' of type subvolume Nov 28 05:07:27 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:27.460+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '84a40f08-8b6b-4686-8d05-33a3e9292f4f' of type subvolume Nov 28 05:07:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "84a40f08-8b6b-4686-8d05-33a3e9292f4f", "force": true, "format": "json"}]: dispatch Nov 28 
05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < "" Nov 28 05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/84a40f08-8b6b-4686-8d05-33a3e9292f4f'' moved to trashcan Nov 28 05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:07:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:84a40f08-8b6b-4686-8d05-33a3e9292f4f, vol_name:cephfs) < "" Nov 28 05:07:27 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:27.474 261346 INFO neutron.agent.dhcp.agent [None req-392bae1e-7b98-46ed-a908-87eca6ab4af9 - - - - - -] DHCP configuration for ports {'3d039e4d-f111-4c85-a4bc-bc275c485ad6', '16cc2844-5a40-4594-9c31-bb5eedf99c06'} is completed#033[00m Nov 28 05:07:27 localhost openstack_network_exporter[240973]: ERROR 10:07:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:07:27 localhost openstack_network_exporter[240973]: ERROR 10:07:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:07:27 localhost openstack_network_exporter[240973]: Nov 28 05:07:27 localhost openstack_network_exporter[240973]: ERROR 10:07:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:07:27 localhost openstack_network_exporter[240973]: ERROR 10:07:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:07:27 localhost openstack_network_exporter[240973]: ERROR 10:07:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:07:27 localhost 
openstack_network_exporter[240973]: Nov 28 05:07:27 localhost dnsmasq[319688]: exiting on receipt of SIGTERM Nov 28 05:07:27 localhost podman[319744]: 2025-11-28 10:07:27.632703607 +0000 UTC m=+0.062101658 container kill 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true) Nov 28 05:07:27 localhost systemd[1]: libpod-5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c.scope: Deactivated successfully. Nov 28 05:07:27 localhost podman[319761]: 2025-11-28 10:07:27.686866629 +0000 UTC m=+0.041211483 container died 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:07:27 localhost podman[319761]: 2025-11-28 10:07:27.710187749 +0000 UTC m=+0.064532533 container cleanup 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:27 localhost systemd[1]: libpod-conmon-5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c.scope: Deactivated successfully. Nov 28 05:07:27 localhost systemd[1]: var-lib-containers-storage-overlay-1910b9af47a7dc6a481f7f88c43c164511ef0e76206f1154477c85aff6d7d500-merged.mount: Deactivated successfully. Nov 28 05:07:27 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:27 localhost podman[319762]: 2025-11-28 10:07:27.758142369 +0000 UTC m=+0.111969007 container remove 5f528661078fe39cce27d38eff7a9a22891918d00b247e0af7a5cd9ad5dfec9c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bb75b6d0-46f7-4ff4-b977-20963925f011, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:07:27 localhost nova_compute[280168]: 2025-11-28 10:07:27.769 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:27 localhost kernel: device tap0f9c27e4-dc left promiscuous mode Nov 28 05:07:27 localhost ovn_controller[152726]: 2025-11-28T10:07:27Z|00185|binding|INFO|Releasing lport 0f9c27e4-dc5b-458a-84e7-59a6845be341 from this chassis (sb_readonly=0) Nov 28 05:07:27 localhost ovn_controller[152726]: 
2025-11-28T10:07:27Z|00186|binding|INFO|Setting lport 0f9c27e4-dc5b-458a-84e7-59a6845be341 down in Southbound Nov 28 05:07:27 localhost nova_compute[280168]: 2025-11-28 10:07:27.784 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:27.789 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe58:df08/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-bb75b6d0-46f7-4ff4-b977-20963925f011', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb75b6d0-46f7-4ff4-b977-20963925f011', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae10569a38284f298c961498da620c5f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=19295739-89e1-4341-a9f7-bf31d43c2d95, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0f9c27e4-dc5b-458a-84e7-59a6845be341) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:27.789 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0f9c27e4-dc5b-458a-84e7-59a6845be341 
in datapath bb75b6d0-46f7-4ff4-b977-20963925f011 unbound from our chassis#033[00m Nov 28 05:07:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:27.791 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb75b6d0-46f7-4ff4-b977-20963925f011, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:27 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:27.791 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6655c9-58bd-49f6-804d-104c0a0c35fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 213 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 651 KiB/s rd, 4.0 MiB/s wr, 230 op/s Nov 28 05:07:27 localhost systemd[1]: tmp-crun.kzKinO.mount: Deactivated successfully. Nov 28 05:07:27 localhost podman[319807]: 2025-11-28 10:07:27.865770232 +0000 UTC m=+0.062263083 container kill b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:27 localhost dnsmasq[319559]: exiting on receipt of SIGTERM Nov 28 05:07:27 localhost systemd[1]: libpod-b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948.scope: Deactivated successfully. 
Nov 28 05:07:27 localhost podman[319820]: 2025-11-28 10:07:27.944305227 +0000 UTC m=+0.063612135 container died b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:27 localhost podman[319820]: 2025-11-28 10:07:27.980261526 +0000 UTC m=+0.099568394 container cleanup b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:07:27 localhost systemd[1]: libpod-conmon-b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948.scope: Deactivated successfully. 
Nov 28 05:07:28 localhost podman[319822]: 2025-11-28 10:07:28.023030637 +0000 UTC m=+0.133608706 container remove b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:28 localhost nova_compute[280168]: 2025-11-28 10:07:28.070 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:28 localhost dnsmasq[319301]: exiting on receipt of SIGTERM Nov 28 05:07:28 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:28.219 261346 INFO neutron.agent.dhcp.agent [None req-f32e5221-32bb-49e5-8900-cc184f56a431 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:28 localhost systemd[1]: libpod-e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df.scope: Deactivated successfully. 
Nov 28 05:07:28 localhost podman[319866]: 2025-11-28 10:07:28.221270656 +0000 UTC m=+0.064790191 container kill e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:07:28 localhost podman[319883]: 2025-11-28 10:07:28.291552266 +0000 UTC m=+0.053025448 container died e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:07:28 localhost podman[319883]: 2025-11-28 10:07:28.317979002 +0000 UTC m=+0.079452164 container cleanup e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 
05:07:28 localhost systemd[1]: libpod-conmon-e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df.scope: Deactivated successfully. Nov 28 05:07:28 localhost podman[319885]: 2025-11-28 10:07:28.368384967 +0000 UTC m=+0.122774591 container remove e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:28 localhost systemd[1]: var-lib-containers-storage-overlay-d1a40302d152a1d4acd8c242f619baa5eea043f1a1ad2914ca314bd0285d34a6-merged.mount: Deactivated successfully. Nov 28 05:07:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b64edd8db7d47901ad1bacd35e70a1e47dcd67493ee12a17dc2d4506c9cef948-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:28 localhost systemd[1]: run-netns-qdhcp\x2dbb75b6d0\x2d46f7\x2d4ff4\x2db977\x2d20963925f011.mount: Deactivated successfully. Nov 28 05:07:28 localhost systemd[1]: var-lib-containers-storage-overlay-8ff3cb155a93a6ece0fb95602d69cd611411b4d92a56342074acdd1cb9fb6b26-merged.mount: Deactivated successfully. Nov 28 05:07:28 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e023ab45866580f82c7d33d531f7dc939662032f69b1cde9a1a731e04d9714df-userdata-shm.mount: Deactivated successfully. 
Nov 28 05:07:28 localhost podman[239012]: time="2025-11-28T10:07:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:07:28 localhost podman[239012]: @ - - [28/Nov/2025:10:07:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158154 "" "Go-http-client/1.1" Nov 28 05:07:28 localhost podman[319950]: Nov 28 05:07:29 localhost podman[319950]: 2025-11-28 10:07:29.008220639 +0000 UTC m=+0.143855972 container create 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125) Nov 28 05:07:29 localhost systemd[1]: Started libpod-conmon-8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54.scope. Nov 28 05:07:29 localhost podman[319950]: 2025-11-28 10:07:28.966518251 +0000 UTC m=+0.102153584 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:29 localhost podman[239012]: @ - - [28/Nov/2025:10:07:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19695 "" "Go-http-client/1.1" Nov 28 05:07:29 localhost systemd[1]: Started libcrun container. 
Nov 28 05:07:29 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7331610e5b3e756e96e7e41d0f681a4e4ee10db4858c583003c7c90264e22183/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:29 localhost podman[319950]: 2025-11-28 10:07:29.111824427 +0000 UTC m=+0.247459740 container init 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:07:29 localhost podman[319950]: 2025-11-28 10:07:29.122116275 +0000 UTC m=+0.257751618 container start 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:29 localhost dnsmasq[319982]: started, version 2.85 cachesize 150 Nov 28 05:07:29 localhost dnsmasq[319982]: DNS service limited to local subnets Nov 28 05:07:29 localhost dnsmasq[319982]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:29 localhost dnsmasq[319982]: warning: no upstream servers 
configured Nov 28 05:07:29 localhost dnsmasq-dhcp[319982]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d Nov 28 05:07:29 localhost dnsmasq-dhcp[319982]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d Nov 28 05:07:29 localhost dnsmasq[319982]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/addn_hosts - 0 addresses Nov 28 05:07:29 localhost dnsmasq-dhcp[319982]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/host Nov 28 05:07:29 localhost dnsmasq-dhcp[319982]: read /var/lib/neutron/dhcp/719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb/opts Nov 28 05:07:29 localhost nova_compute[280168]: 2025-11-28 10:07:29.381 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:29 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:29.668 261346 INFO neutron.agent.dhcp.agent [None req-52e93256-15f5-47cc-a32e-553ffe614e2c - - - - - -] DHCP configuration for ports {'e18fca2e-eaeb-40cf-9eb1-203ecf5b0aa2', '8646aa95-6463-44cd-8c34-1bec1705e23b', '8bc6a73d-610f-4f06-b515-26f3efcf46a4'} is completed#033[00m Nov 28 05:07:29 localhost dnsmasq[319982]: exiting on receipt of SIGTERM Nov 28 05:07:29 localhost podman[320011]: 2025-11-28 10:07:29.712721557 +0000 UTC m=+0.060920322 container kill 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:07:29 localhost systemd[1]: 
libpod-8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54.scope: Deactivated successfully. Nov 28 05:07:29 localhost podman[320025]: 2025-11-28 10:07:29.782748719 +0000 UTC m=+0.055018250 container died 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:07:29 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:29 localhost podman[320025]: 2025-11-28 10:07:29.815686916 +0000 UTC m=+0.087956417 container cleanup 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS) Nov 28 05:07:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 213 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 466 KiB/s rd, 3.1 MiB/s wr, 124 op/s Nov 28 05:07:29 localhost systemd[1]: libpod-conmon-8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54.scope: Deactivated successfully. 
Nov 28 05:07:29 localhost podman[320027]: 2025-11-28 10:07:29.861975584 +0000 UTC m=+0.127395853 container remove 8c38f521b1dcb7ce646704ab260d0f5ca16b6aaab35e78128688f13d91893f54 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:07:29 localhost nova_compute[280168]: 2025-11-28 10:07:29.875 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:29 localhost ovn_controller[152726]: 2025-11-28T10:07:29Z|00187|binding|INFO|Releasing lport 8bc6a73d-610f-4f06-b515-26f3efcf46a4 from this chassis (sb_readonly=0) Nov 28 05:07:29 localhost ovn_controller[152726]: 2025-11-28T10:07:29Z|00188|binding|INFO|Setting lport 8bc6a73d-610f-4f06-b515-26f3efcf46a4 down in Southbound Nov 28 05:07:29 localhost kernel: device tap8bc6a73d-61 left promiscuous mode Nov 28 05:07:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:29.885 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb', 
'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8462a4a9a313405e8fd212f9ec4a0c92', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe1e7b19-836c-4f4d-9811-92d20be8712f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=8bc6a73d-610f-4f06-b515-26f3efcf46a4) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:29.887 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 8bc6a73d-610f-4f06-b515-26f3efcf46a4 in datapath 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb unbound from our chassis#033[00m Nov 28 05:07:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:29.889 158530 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 719e8b2f-3827-4b26-9c22-6bd0f7ed7ceb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m Nov 28 05:07:29 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:29.889 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[140de10c-a25f-40f2-a1eb-a45d59d9840f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:29 localhost nova_compute[280168]: 2025-11-28 10:07:29.900 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:30 localhost podman[320076]: Nov 28 
05:07:30 localhost podman[320076]: 2025-11-28 10:07:30.024650616 +0000 UTC m=+0.088494793 container create 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 05:07:30 localhost systemd[1]: Started libpod-conmon-78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b.scope. Nov 28 05:07:30 localhost systemd[1]: Started libcrun container. Nov 28 05:07:30 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8feab9915cd549e508973929d606dec6edf6c2c66c5c9a9b78239bf5e744001b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:30 localhost podman[320076]: 2025-11-28 10:07:30.080299044 +0000 UTC m=+0.144143221 container init 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:07:30 localhost podman[320076]: 2025-11-28 10:07:29.981157984 +0000 UTC m=+0.045002241 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:30 localhost podman[320076]: 2025-11-28 
10:07:30.08926154 +0000 UTC m=+0.153105717 container start 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:30 localhost dnsmasq[320095]: started, version 2.85 cachesize 150 Nov 28 05:07:30 localhost dnsmasq[320095]: DNS service limited to local subnets Nov 28 05:07:30 localhost dnsmasq[320095]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:30 localhost dnsmasq[320095]: warning: no upstream servers configured Nov 28 05:07:30 localhost dnsmasq-dhcp[320095]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 28 05:07:30 localhost dnsmasq-dhcp[320095]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:07:30 localhost dnsmasq[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses Nov 28 05:07:30 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:30 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:30 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:30.298 2 INFO neutron.agent.securitygroups_rpc [None req-bb1d0f4f-4080-47c2-b71c-a5aaec3a62e2 e32848e36ae94f66ae634ff4d7716d6f 8462a4a9a313405e8fd212f9ec4a0c92 - - default default] Security group member updated ['78343c03-098f-4faf-a880-2814fe3611d6']#033[00m Nov 28 05:07:30 localhost 
neutron_dhcp_agent[261342]: 2025-11-28 10:07:30.398 261346 INFO neutron.agent.dhcp.agent [None req-0061627a-68e5-4249-8609-95796237c4aa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:30 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:30.399 261346 INFO neutron.agent.dhcp.agent [None req-0061627a-68e5-4249-8609-95796237c4aa - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e197 e197: 6 total, 6 up, 6 in Nov 28 05:07:30 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:30.581 261346 INFO neutron.agent.dhcp.agent [None req-61a6a753-713e-48a8-bc33-2d6f6814306a - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', 'd42f5d61-afe1-455f-b448-86993094b244', '3f096e93-c3cf-440a-8cda-fd3f17a679fb', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m Nov 28 05:07:30 localhost systemd[1]: var-lib-containers-storage-overlay-7331610e5b3e756e96e7e41d0f681a4e4ee10db4858c583003c7c90264e22183-merged.mount: Deactivated successfully. Nov 28 05:07:30 localhost systemd[1]: run-netns-qdhcp\x2d719e8b2f\x2d3827\x2d4b26\x2d9c22\x2d6bd0f7ed7ceb.mount: Deactivated successfully. 
Nov 28 05:07:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3715b589-30b5-48c3-ae35-bbb103548e98", "format": "json"}]: dispatch Nov 28 05:07:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:31.395 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.19 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], 
tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9fa2ef-906d-4c5f-8a61-e350b84d90cf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=636c21fa-d6bd-405e-95ed-d59498827d6f) old=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:31.397 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 636c21fa-d6bd-405e-95ed-d59498827d6f in datapath 744b5a82-3c5c-4b41-ba44-527244a209c4 updated#033[00m Nov 28 05:07:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:31.401 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port a03547c5-d094-4489-b8d5-6b024d72dbea IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:07:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:31.401 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 744b5a82-3c5c-4b41-ba44-527244a209c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:31 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:31.402 261619 DEBUG oslo.privsep.daemon [-] privsep: 
reply[c03a809a-5b82-46c4-94e9-bc0af302abbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:31 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:31.542 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 225 MiB data, 1022 MiB used, 41 GiB / 42 GiB avail; 546 KiB/s rd, 3.2 MiB/s wr, 172 op/s Nov 28 05:07:31 localhost nova_compute[280168]: 2025-11-28 10:07:31.977 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:32 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:32.631 2 INFO neutron.agent.securitygroups_rpc [None req-b1e499b5-5b30-44cb-89ef-2d90dabf973f 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['b5d46958-1542-44c0-a82a-37e69acb7089', 'acf02bd6-8fdb-4bdf-b655-c11d3c48057a']#033[00m Nov 28 05:07:32 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:32.656 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:25Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3f096e93-c3cf-440a-8cda-fd3f17a679fb, ip_allocation=immediate, mac_address=fa:16:3e:bd:2b:32, name=tempest-PortsTestJSON-1565841995, network_id=744b5a82-3c5c-4b41-ba44-527244a209c4, port_security_enabled=True, 
project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['b5d46958-1542-44c0-a82a-37e69acb7089'], standard_attr_id=2522, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:32Z on network 744b5a82-3c5c-4b41-ba44-527244a209c4#033[00m Nov 28 05:07:32 localhost dnsmasq-dhcp[320095]: DHCPRELEASE(tapd42f5d61-af) 10.100.0.5 fa:16:3e:bd:2b:32 Nov 28 05:07:33 localhost nova_compute[280168]: 2025-11-28 10:07:33.109 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:33 localhost dnsmasq[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses Nov 28 05:07:33 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:33 localhost podman[320114]: 2025-11-28 10:07:33.248275019 +0000 UTC m=+0.064948026 container kill 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:33 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:33 localhost systemd[1]: tmp-crun.IxeUIu.mount: Deactivated successfully. 
Nov 28 05:07:33 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:33.598 2 INFO neutron.agent.securitygroups_rpc [None req-7e9cce24-3864-452e-838f-0b8e85be3343 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['b5d46958-1542-44c0-a82a-37e69acb7089']#033[00m Nov 28 05:07:33 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:33.616 261346 INFO neutron.agent.dhcp.agent [None req-a199181a-4586-4beb-b947-620611dc04f7 - - - - - -] DHCP configuration for ports {'3f096e93-c3cf-440a-8cda-fd3f17a679fb'} is completed#033[00m Nov 28 05:07:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v392: 177 pgs: 177 active+clean; 225 MiB data, 1022 MiB used, 41 GiB / 42 GiB avail; 478 KiB/s rd, 2.8 MiB/s wr, 151 op/s Nov 28 05:07:34 localhost nova_compute[280168]: 2025-11-28 10:07:34.427 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:34 localhost dnsmasq[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses Nov 28 05:07:34 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:34 localhost dnsmasq-dhcp[320095]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:34 localhost podman[320151]: 2025-11-28 10:07:34.637218424 +0000 UTC m=+0.060297453 container kill 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base 
Image, org.label-schema.schema-version=1.0) Nov 28 05:07:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3715b589-30b5-48c3-ae35-bbb103548e98_18a37cfd-0766-4bf4-885c-08ab764ad956", "force": true, "format": "json"}]: dispatch Nov 28 05:07:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98_18a37cfd-0766-4bf4-885c-08ab764ad956, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:07:35 localhost dnsmasq[320095]: exiting on receipt of SIGTERM Nov 28 05:07:35 localhost podman[320189]: 2025-11-28 10:07:35.708057231 +0000 UTC m=+0.065096920 container kill 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:07:35 localhost systemd[1]: libpod-78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b.scope: Deactivated successfully. Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:07:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:07:35 localhost podman[320202]: 2025-11-28 10:07:35.781631162 +0000 UTC m=+0.062120788 container died 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:35 localhost podman[320202]: 2025-11-28 10:07:35.814014572 +0000 UTC m=+0.094504158 container cleanup 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:35 localhost systemd[1]: libpod-conmon-78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b.scope: Deactivated successfully. 
Nov 28 05:07:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v393: 177 pgs: 177 active+clean; 225 MiB data, 1022 MiB used, 41 GiB / 42 GiB avail; 437 KiB/s rd, 2.6 MiB/s wr, 138 op/s Nov 28 05:07:35 localhost podman[320204]: 2025-11-28 10:07:35.861512149 +0000 UTC m=+0.133214954 container remove 78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98_18a37cfd-0766-4bf4-885c-08ab764ad956, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3715b589-30b5-48c3-ae35-bbb103548e98", "force": true, "format": "json"}]: dispatch Nov 28 
05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3715b589-30b5-48c3-ae35-bbb103548e98, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta' Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "format": "json"}]: dispatch Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:36 localhost systemd[1]: var-lib-containers-storage-overlay-8feab9915cd549e508973929d606dec6edf6c2c66c5c9a9b78239bf5e744001b-merged.mount: Deactivated successfully. Nov 28 05:07:36 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-78986f9e71a77301eadcca6678ea6748e2fa00cb1553a0dcc251102da5ea6e4b-userdata-shm.mount: Deactivated successfully. 
Nov 28 05:07:36 localhost podman[320282]: Nov 28 05:07:36 localhost podman[320282]: 2025-11-28 10:07:36.788962729 +0000 UTC m=+0.071700125 container create 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:07:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:36 localhost systemd[1]: Started libpod-conmon-19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d.scope. Nov 28 05:07:36 localhost systemd[1]: Started libcrun container. 
Nov 28 05:07:36 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37e1f2d784bb480b9928814a810411a02129e006e80f1b36cac11a8afe4824eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:36 localhost podman[320282]: 2025-11-28 10:07:36.749505781 +0000 UTC m=+0.032243187 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:36 localhost podman[320282]: 2025-11-28 10:07:36.850199209 +0000 UTC m=+0.132936575 container init 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:36 localhost podman[320282]: 2025-11-28 10:07:36.857739021 +0000 UTC m=+0.140476387 container start 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:36 localhost dnsmasq[320301]: started, version 2.85 cachesize 150 Nov 28 05:07:36 localhost dnsmasq[320301]: DNS service limited to local subnets Nov 28 05:07:36 localhost dnsmasq[320301]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:36 localhost dnsmasq[320301]: warning: no upstream servers configured Nov 28 05:07:36 localhost dnsmasq-dhcp[320301]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 28 05:07:36 localhost dnsmasq[320301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses Nov 28 05:07:36 localhost dnsmasq-dhcp[320301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:36 localhost dnsmasq-dhcp[320301]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:37.098 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.19 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9fa2ef-906d-4c5f-8a61-e350b84d90cf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=636c21fa-d6bd-405e-95ed-d59498827d6f) 
old=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.19 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:37.101 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 636c21fa-d6bd-405e-95ed-d59498827d6f in datapath 744b5a82-3c5c-4b41-ba44-527244a209c4 updated#033[00m Nov 28 05:07:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:37.104 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port a03547c5-d094-4489-b8d5-6b024d72dbea IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:07:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:37.105 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 744b5a82-3c5c-4b41-ba44-527244a209c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:37.106 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[646ff614-23b5-4a1f-a91c-fc245f4e3161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:37 localhost dnsmasq[320301]: exiting on receipt of SIGTERM Nov 28 05:07:37 
localhost podman[320319]: 2025-11-28 10:07:37.381384406 +0000 UTC m=+0.065350568 container kill 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:37 localhost systemd[1]: libpod-19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d.scope: Deactivated successfully. Nov 28 05:07:37 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:37.401 261346 INFO neutron.agent.dhcp.agent [None req-5962ec3e-3f95-4335-8134-4b9f103284ff - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', 'd42f5d61-afe1-455f-b448-86993094b244', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m Nov 28 05:07:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:07:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:07:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:07:37 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:07:37 localhost podman[320335]: 2025-11-28 10:07:37.488898596 +0000 UTC m=+0.081371933 container died 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:37 localhost podman[320335]: 2025-11-28 10:07:37.523762972 +0000 UTC m=+0.116236329 container remove 19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:07:37 localhost podman[320359]: 2025-11-28 10:07:37.581121623 +0000 UTC m=+0.145961758 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:07:37 localhost podman[320359]: 2025-11-28 10:07:37.597406145 +0000 UTC m=+0.162246280 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:07:37 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:07:37 localhost podman[320348]: 2025-11-28 10:07:37.613317266 +0000 UTC m=+0.189848172 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:37 localhost podman[320346]: 2025-11-28 10:07:37.566053007 +0000 UTC m=+0.147769373 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:37 localhost systemd[1]: libpod-conmon-19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d.scope: Deactivated successfully. 
Nov 28 05:07:37 localhost podman[320351]: 2025-11-28 10:07:37.674175235 +0000 UTC m=+0.244178518 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:07:37 localhost podman[320346]: 2025-11-28 10:07:37.699115815 +0000 UTC 
m=+0.280832181 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:07:37 localhost systemd[1]: var-lib-containers-storage-overlay-37e1f2d784bb480b9928814a810411a02129e006e80f1b36cac11a8afe4824eb-merged.mount: Deactivated successfully. 
Nov 28 05:07:37 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-19060c1d1f4ed3cdc711b6e2837e2c7d9b19bdf44425397fde8af14a5c542a9d-userdata-shm.mount: Deactivated successfully. Nov 28 05:07:37 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:07:37 localhost podman[320348]: 2025-11-28 10:07:37.728186322 +0000 UTC m=+0.304717298 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:37 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:07:37 localhost podman[320351]: 2025-11-28 10:07:37.754310178 +0000 UTC m=+0.324313491 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:37 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:07:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 225 MiB data, 1023 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 141 KiB/s wr, 42 op/s Nov 28 05:07:38 localhost nova_compute[280168]: 2025-11-28 10:07:38.143 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e198 e198: 6 total, 6 up, 6 in Nov 28 05:07:38 localhost podman[320493]: Nov 28 05:07:38 localhost podman[320493]: 2025-11-28 10:07:38.911363696 +0000 UTC m=+0.082885939 container create cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:07:38 localhost systemd[1]: Started libpod-conmon-cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b.scope. Nov 28 05:07:38 localhost systemd[1]: Started libcrun container. 
Nov 28 05:07:38 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0012342cf8bd08ca02fc301bcea34a05720aed503942f357e7119093cd5e2b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:38 localhost podman[320493]: 2025-11-28 10:07:38.968600114 +0000 UTC m=+0.140122357 container init cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:07:38 localhost podman[320493]: 2025-11-28 10:07:38.873764336 +0000 UTC m=+0.045286629 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:38 localhost podman[320493]: 2025-11-28 10:07:38.976782286 +0000 UTC m=+0.148304529 container start cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2) Nov 28 05:07:38 localhost dnsmasq[320512]: started, version 2.85 cachesize 150 Nov 28 05:07:38 localhost dnsmasq[320512]: DNS service limited to local subnets Nov 28 05:07:38 localhost dnsmasq[320512]: compile time options: IPv6 GNU-getopt DBus no-UBus 
no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:38 localhost dnsmasq[320512]: warning: no upstream servers configured Nov 28 05:07:38 localhost dnsmasq-dhcp[320512]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:07:38 localhost dnsmasq-dhcp[320512]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 28 05:07:38 localhost dnsmasq[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses Nov 28 05:07:38 localhost dnsmasq-dhcp[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:38 localhost dnsmasq-dhcp[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:39.359 261346 INFO neutron.agent.dhcp.agent [None req-964425bd-3569-4dc7-a241-0bf1601b405c - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', 'd42f5d61-afe1-455f-b448-86993094b244', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m Nov 28 05:07:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "snap_name": "a7f380a2-0f60-4351-b764-bd78832f244d", "format": "json"}]: dispatch Nov 28 05:07:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) 
< "" Nov 28 05:07:39 localhost nova_compute[280168]: 2025-11-28 10:07:39.468 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e199 e199: 6 total, 6 up, 6 in Nov 28 05:07:39 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:39.733 2 INFO neutron.agent.securitygroups_rpc [None req-c0e4f748-a4dd-449c-b793-430f30c9256f 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['58a0f932-b9f0-4bad-a5cb-a3c8cba7c65b']#033[00m Nov 28 05:07:39 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:39.772 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:39Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ef709d14-bdfb-4122-9587-c257ef31d183, ip_allocation=immediate, mac_address=fa:16:3e:6b:0f:d4, name=tempest-PortsTestJSON-1427451617, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T10:05:56Z, description=, dns_domain=, id=744b5a82-3c5c-4b41-ba44-527244a209c4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-test-network-935184943, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32831, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=2138, status=ACTIVE, subnets=['ce735290-31b9-4af4-844a-71b2fcf68031', 'd7466e39-52ac-4561-ade5-b21553db79fb'], tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, 
updated_at=2025-11-28T10:07:35Z, vlan_transparent=None, network_id=744b5a82-3c5c-4b41-ba44-527244a209c4, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['58a0f932-b9f0-4bad-a5cb-a3c8cba7c65b'], standard_attr_id=2574, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:39Z on network 744b5a82-3c5c-4b41-ba44-527244a209c4#033[00m Nov 28 05:07:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v397: 177 pgs: 177 active+clean; 225 MiB data, 1023 MiB used, 41 GiB / 42 GiB avail; 1.6 KiB/s rd, 26 KiB/s wr, 4 op/s Nov 28 05:07:39 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:07:39 localhost podman[320513]: 2025-11-28 10:07:39.983202134 +0000 UTC m=+0.086988476 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:07:40 localhost podman[320513]: 2025-11-28 10:07:40.017678108 +0000 UTC m=+0.121464390 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:07:40 localhost systemd[1]: 
56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:07:40 localhost dnsmasq[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses Nov 28 05:07:40 localhost dnsmasq-dhcp[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:40 localhost dnsmasq-dhcp[320512]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:40 localhost podman[320552]: 2025-11-28 10:07:40.15021633 +0000 UTC m=+0.062198172 container kill cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:07:40 localhost systemd[1]: tmp-crun.m41PEJ.mount: Deactivated successfully. 
Nov 28 05:07:40 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:40.493 261346 INFO neutron.agent.dhcp.agent [None req-f0fdf4cc-1f8a-4bf6-a800-da6a955dab3d - - - - - -] DHCP configuration for ports {'ef709d14-bdfb-4122-9587-c257ef31d183'} is completed#033[00m Nov 28 05:07:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e200 e200: 6 total, 6 up, 6 in Nov 28 05:07:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "1ad625c7-a9e5-4671-b5b8-87ed0dc833b6", "format": "json"}]: dispatch Nov 28 05:07:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e201 e201: 6 total, 6 up, 6 in Nov 28 05:07:41 localhost dnsmasq[320512]: exiting on receipt of SIGTERM Nov 28 05:07:41 localhost podman[320590]: 2025-11-28 10:07:41.561456024 +0000 UTC m=+0.065296126 container kill cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:07:41 localhost systemd[1]: libpod-cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b.scope: Deactivated successfully. Nov 28 05:07:41 localhost podman[320605]: 2025-11-28 10:07:41.636100579 +0000 UTC m=+0.059925381 container died cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:07:41 localhost systemd[1]: tmp-crun.VVRtLF.mount: Deactivated successfully. Nov 28 05:07:41 localhost podman[320605]: 2025-11-28 10:07:41.668454528 +0000 UTC m=+0.092279260 container cleanup cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:07:41 localhost systemd[1]: libpod-conmon-cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b.scope: Deactivated successfully. 
Nov 28 05:07:41 localhost podman[320606]: 2025-11-28 10:07:41.711504976 +0000 UTC m=+0.128933771 container remove cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 05:07:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v400: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 35 KiB/s wr, 184 op/s Nov 28 05:07:42 localhost nova_compute[280168]: 2025-11-28 10:07:42.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:42 localhost systemd[1]: var-lib-containers-storage-overlay-c0012342cf8bd08ca02fc301bcea34a05720aed503942f357e7119093cd5e2b7-merged.mount: Deactivated successfully. Nov 28 05:07:42 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb475ecb689927ab91959b49b410b28fc5f8205ec91836dd9cfa49a3750f631b-userdata-shm.mount: Deactivated successfully. 
Nov 28 05:07:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e202 e202: 6 total, 6 up, 6 in Nov 28 05:07:43 localhost nova_compute[280168]: 2025-11-28 10:07:43.156 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:43.490 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.19 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9fa2ef-906d-4c5f-8a61-e350b84d90cf, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=636c21fa-d6bd-405e-95ed-d59498827d6f) old=Port_Binding(mac=['fa:16:3e:af:3b:02 10.100.0.19 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.19/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 
'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:43.492 158530 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 636c21fa-d6bd-405e-95ed-d59498827d6f in datapath 744b5a82-3c5c-4b41-ba44-527244a209c4 updated#033[00m Nov 28 05:07:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:43.495 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Port a03547c5-d094-4489-b8d5-6b024d72dbea IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536#033[00m Nov 28 05:07:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:43.495 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 744b5a82-3c5c-4b41-ba44-527244a209c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:07:43 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:43.496 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[9d322294-e6d7-4255-8886-c5369ddec8fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:07:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v402: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 32 KiB/s wr, 170 op/s Nov 28 05:07:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": 
"cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "snap_name": "a7f380a2-0f60-4351-b764-bd78832f244d_c6ebf898-e639-41ae-92f7-1c129b5ee100", "force": true, "format": "json"}]: dispatch Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d_c6ebf898-e639-41ae-92f7-1c129b5ee100, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta' Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d_c6ebf898-e639-41ae-92f7-1c129b5ee100, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "snap_name": "a7f380a2-0f60-4351-b764-bd78832f244d", "force": true, "format": "json"}]: dispatch Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:43 localhost systemd[1]: Started /usr/bin/podman 
healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta.tmp' to config b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806/.meta' Nov 28 05:07:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a7f380a2-0f60-4351-b764-bd78832f244d, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:43 localhost podman[320648]: 2025-11-28 10:07:43.983787411 +0000 UTC m=+0.092128905 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true) Nov 28 05:07:44 localhost podman[320648]: 2025-11-28 10:07:44.025408746 +0000 UTC m=+0.133750250 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:07:44 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:07:44 localhost nova_compute[280168]: 2025-11-28 10:07:44.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:44.334 2 INFO neutron.agent.securitygroups_rpc [None req-cecabd37-7803-4c2d-a13a-d3905bbc0cfc 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['18fcd73e-4837-425f-bf44-9ed4ac2aa187', '58a0f932-b9f0-4bad-a5cb-a3c8cba7c65b', 'bec6547e-445f-4500-b371-6e2fc240d4db']#033[00m Nov 28 05:07:44 localhost nova_compute[280168]: 2025-11-28 10:07:44.507 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e203 e203: 6 total, 6 up, 6 in Nov 28 05:07:44 localhost podman[320701]: Nov 28 05:07:44 localhost podman[320701]: 2025-11-28 10:07:44.645184658 +0000 UTC m=+0.098128080 container create ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125) Nov 28 05:07:44 localhost systemd[1]: Started libpod-conmon-ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d.scope. Nov 28 05:07:44 localhost systemd[1]: Started libcrun container. Nov 28 05:07:44 localhost podman[320701]: 2025-11-28 10:07:44.59534079 +0000 UTC m=+0.048284232 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified Nov 28 05:07:44 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb36c4db1eaf4ba887ed8bd7f5126f0d10ffb93de60f53a93db5d592a36f5f2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff) Nov 28 05:07:44 localhost podman[320701]: 2025-11-28 10:07:44.705005136 +0000 UTC m=+0.157948548 container init ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:07:44 localhost podman[320701]: 2025-11-28 10:07:44.713321002 +0000 UTC m=+0.166264414 container start ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d 
(image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2) Nov 28 05:07:44 localhost dnsmasq[320719]: started, version 2.85 cachesize 150 Nov 28 05:07:44 localhost dnsmasq[320719]: DNS service limited to local subnets Nov 28 05:07:44 localhost dnsmasq[320719]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile Nov 28 05:07:44 localhost dnsmasq[320719]: warning: no upstream servers configured Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: DHCP, static leases only on 10.100.0.0, lease time 1d Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: DHCP, static leases only on 10.100.0.16, lease time 1d Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: DHCP, static leases only on 10.100.0.32, lease time 1d Nov 28 05:07:44 localhost dnsmasq[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:44 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:44.748 2 INFO neutron.agent.securitygroups_rpc [None req-d99eee51-2169-45db-889c-fcddfc1e6db2 6b80a7d6f65f4ebe8363ccaae26f3e87 ae10569a38284f298c961498da620c5f - - default default] Security group member updated ['c5eee24b-0bed-4035-a2ab-e6c531c94e43']#033[00m Nov 28 05:07:44 localhost neutron_dhcp_agent[261342]: 2025-11-28 
10:07:44.787 261346 INFO neutron.agent.dhcp.agent [None req-44e1ce0c-38da-4b03-a4d9-0628bced247c - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:07:39Z, description=, device_id=, device_owner=, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=ef709d14-bdfb-4122-9587-c257ef31d183, ip_allocation=immediate, mac_address=fa:16:3e:6b:0f:d4, name=tempest-PortsTestJSON-1243252809, network_id=744b5a82-3c5c-4b41-ba44-527244a209c4, port_security_enabled=True, project_id=5e7a07c97c664076bc825e05137c574c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['18fcd73e-4837-425f-bf44-9ed4ac2aa187', 'bec6547e-445f-4500-b371-6e2fc240d4db'], standard_attr_id=2574, status=DOWN, tags=[], tenant_id=5e7a07c97c664076bc825e05137c574c, updated_at=2025-11-28T10:07:44Z on network 744b5a82-3c5c-4b41-ba44-527244a209c4#033[00m Nov 28 05:07:44 localhost dnsmasq-dhcp[320719]: DHCPRELEASE(tapd42f5d61-af) 10.100.0.10 fa:16:3e:6b:0f:d4 Nov 28 05:07:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:45.595 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e204 e204: 6 total, 6 up, 6 in Nov 28 05:07:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 120 KiB/s rd, 32 KiB/s wr, 172 op/s Nov 28 05:07:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "2c0f5770-a785-4c87-af5d-f92c8abce21d", "format": "json"}]: 
dispatch Nov 28 05:07:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:45.980 261346 INFO neutron.agent.dhcp.agent [None req-ffa45ac7-8367-42e7-83fc-786aae64ad4b - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', 'd42f5d61-afe1-455f-b448-86993094b244', 'ef709d14-bdfb-4122-9587-c257ef31d183', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m Nov 28 05:07:46 localhost systemd[1]: tmp-crun.64rDsN.mount: Deactivated successfully. 
Nov 28 05:07:46 localhost dnsmasq[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 1 addresses Nov 28 05:07:46 localhost podman[320738]: 2025-11-28 10:07:46.073016566 +0000 UTC m=+0.072644324 container kill ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:46 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:46 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:46.126 2 INFO neutron.agent.securitygroups_rpc [None req-ff6bd390-74ec-4285-8021-b1b301c7b944 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['18fcd73e-4837-425f-bf44-9ed4ac2aa187', 'bec6547e-445f-4500-b371-6e2fc240d4db']#033[00m Nov 28 05:07:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:46.209 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:07:46 localhost nova_compute[280168]: 2025-11-28 10:07:46.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:46 localhost nova_compute[280168]: 2025-11-28 10:07:46.239 280172 DEBUG 
oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:46.343 261346 INFO neutron.agent.dhcp.agent [None req-011dc99e-6b5b-4adf-8995-9c08519a6a94 - - - - - -] DHCP configuration for ports {'ef709d14-bdfb-4122-9587-c257ef31d183'} is completed#033[00m Nov 28 05:07:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e205 e205: 6 total, 6 up, 6 in Nov 28 05:07:46 localhost dnsmasq[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses Nov 28 05:07:46 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host Nov 28 05:07:46 localhost dnsmasq-dhcp[320719]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts Nov 28 05:07:46 localhost podman[320776]: 2025-11-28 10:07:46.556131449 +0000 UTC m=+0.049811568 container kill ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0) Nov 28 05:07:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:07:46 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:46.878 2 INFO neutron.agent.securitygroups_rpc [None req-02f5cc80-e4fe-45b5-9f60-d631f95eb2c9 
6b80a7d6f65f4ebe8363ccaae26f3e87 ae10569a38284f298c961498da620c5f - - default default] Security group member updated ['c5eee24b-0bed-4035-a2ab-e6c531c94e43']#033[00m Nov 28 05:07:47 localhost nova_compute[280168]: 2025-11-28 10:07:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:47 localhost nova_compute[280168]: 2025-11-28 10:07:47.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:07:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "format": "json"}]: dispatch Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:07:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:07:47.545+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '80ccc338-1d8e-4716-ba2a-1f18e7a6e806' of type subvolume Nov 28 05:07:47 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '80ccc338-1d8e-4716-ba2a-1f18e7a6e806' of type 
subvolume Nov 28 05:07:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "80ccc338-1d8e-4716-ba2a-1f18e7a6e806", "force": true, "format": "json"}]: dispatch Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/80ccc338-1d8e-4716-ba2a-1f18e7a6e806'' moved to trashcan Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:07:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:80ccc338-1d8e-4716-ba2a-1f18e7a6e806, vol_name:cephfs) < "" Nov 28 05:07:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e206 e206: 6 total, 6 up, 6 in Nov 28 05:07:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 142 KiB/s rd, 30 KiB/s wr, 198 op/s Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.200 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.238 280172 DEBUG nova.compute.manager [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.341 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.342 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.377 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:48.378 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=14) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:07:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:48.379 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.398 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.399 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.399 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.399 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.400 280172 DEBUG oslo_concurrency.processutils [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:07:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:07:48 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1383712359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:07:48 localhost nova_compute[280168]: 2025-11-28 10:07:48.848 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.075 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.078 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11506MB free_disk=41.70003890991211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.078 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.079 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.543 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:07:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e207 e207: 6 total, 6 up, 6 in Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.786 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.786 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: 
name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:07:49 localhost nova_compute[280168]: 2025-11-28 10:07:49.808 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:07:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 135 KiB/s rd, 29 KiB/s wr, 188 op/s Nov 28 05:07:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "46e657b3-6b45-48d1-8f94-979e6670605a", "format": "json"}]: dispatch Nov 28 05:07:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:07:50 localhost dnsmasq[320719]: exiting on receipt of SIGTERM Nov 28 05:07:50 localhost systemd[1]: libpod-ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d.scope: Deactivated successfully. 
Nov 28 05:07:50 localhost podman[320857]: 2025-11-28 10:07:50.084507859 +0000 UTC m=+0.060270532 container kill ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:07:50 localhost podman[320870]: 2025-11-28 10:07:50.129952902 +0000 UTC m=+0.031688239 container died ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true) Nov 28 05:07:50 localhost podman[320870]: 2025-11-28 10:07:50.158767421 +0000 UTC m=+0.060502768 container cleanup ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125) Nov 28 
05:07:50 localhost systemd[1]: libpod-conmon-ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d.scope: Deactivated successfully. Nov 28 05:07:50 localhost podman[320877]: 2025-11-28 10:07:50.197082344 +0000 UTC m=+0.087260984 container remove ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0) Nov 28 05:07:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:07:50 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/674541035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:07:50 localhost nova_compute[280168]: 2025-11-28 10:07:50.257 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:07:50 localhost nova_compute[280168]: 2025-11-28 10:07:50.263 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:07:50 localhost nova_compute[280168]: 2025-11-28 10:07:50.396 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:07:50 localhost nova_compute[280168]: 2025-11-28 10:07:50.399 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:07:50 localhost nova_compute[280168]: 2025-11-28 10:07:50.399 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:07:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:07:50 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3345109558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:07:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:50.851 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:07:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:50.852 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:07:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:50.852 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:07:51 localhost systemd[1]: var-lib-containers-storage-overlay-cbb36c4db1eaf4ba887ed8bd7f5126f0d10ffb93de60f53a93db5d592a36f5f2-merged.mount: Deactivated successfully. 
Nov 28 05:07:51 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ade539c5587c5a7549a112e7f1f7660e83ca9e8302a21cb1eea33af2732f279d-userdata-shm.mount: Deactivated successfully.
Nov 28 05:07:51 localhost nova_compute[280168]: 2025-11-28 10:07:51.297 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:07:51 localhost nova_compute[280168]: 2025-11-28 10:07:51.298 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:07:51 localhost nova_compute[280168]: 2025-11-28 10:07:51.298 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:07:51 localhost podman[320948]:
Nov 28 05:07:51 localhost podman[320948]: 2025-11-28 10:07:51.407409807 +0000 UTC m=+0.087524053 container create 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 28 05:07:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e208 e208: 6 total, 6 up, 6 in
Nov 28 05:07:51 localhost systemd[1]: Started libpod-conmon-25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892.scope.
Nov 28 05:07:51 localhost podman[320948]: 2025-11-28 10:07:51.365033959 +0000 UTC m=+0.045148255 image pull quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Nov 28 05:07:51 localhost systemd[1]: Started libcrun container.
Nov 28 05:07:51 localhost kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2034494d726fb92689f42123b6bec02a92e71b6d85a0adb34bb7e76ef306caf4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 28 05:07:51 localhost podman[320948]: 2025-11-28 10:07:51.503972028 +0000 UTC m=+0.184086274 container init 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 28 05:07:51 localhost podman[320948]: 2025-11-28 10:07:51.511038426 +0000 UTC m=+0.191152672 container start 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 28 05:07:51 localhost dnsmasq[320966]: started, version 2.85 cachesize 150
Nov 28 05:07:51 localhost dnsmasq[320966]: DNS service limited to local subnets
Nov 28 05:07:51 localhost dnsmasq[320966]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Nov 28 05:07:51 localhost dnsmasq[320966]: warning: no upstream servers configured
Nov 28 05:07:51 localhost dnsmasq-dhcp[320966]: DHCP, static leases only on 10.100.0.16, lease time 1d
Nov 28 05:07:51 localhost dnsmasq-dhcp[320966]: DHCP, static leases only on 10.100.0.32, lease time 1d
Nov 28 05:07:51 localhost dnsmasq[320966]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/addn_hosts - 0 addresses
Nov 28 05:07:51 localhost dnsmasq-dhcp[320966]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/host
Nov 28 05:07:51 localhost dnsmasq-dhcp[320966]: read /var/lib/neutron/dhcp/744b5a82-3c5c-4b41-ba44-527244a209c4/opts
Nov 28 05:07:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:07:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v412: 177 pgs: 177 active+clean; 225 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 185 KiB/s rd, 41 KiB/s wr, 254 op/s
Nov 28 05:07:51 localhost dnsmasq[320966]: exiting on receipt of SIGTERM
Nov 28 05:07:51 localhost podman[320984]: 2025-11-28 10:07:51.888335133 +0000 UTC m=+0.065044339 container kill 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 05:07:51 localhost systemd[1]: libpod-25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892.scope: Deactivated successfully.
Nov 28 05:07:51 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:51.951 261346 INFO neutron.agent.dhcp.agent [None req-25e55253-a679-4a11-8582-8826d1bd0a82 - - - - - -] DHCP configuration for ports {'530dc798-1c6a-4c38-a11f-57f3818e5561', 'd42f5d61-afe1-455f-b448-86993094b244', '636c21fa-d6bd-405e-95ed-d59498827d6f'} is completed#033[00m
Nov 28 05:07:51 localhost podman[320999]: 2025-11-28 10:07:51.965733242 +0000 UTC m=+0.055952669 container died 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 28 05:07:52 localhost podman[320999]: 2025-11-28 10:07:52.006289744 +0000 UTC m=+0.096509131 container remove 25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-744b5a82-3c5c-4b41-ba44-527244a209c4, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 05:07:52 localhost systemd[1]: libpod-conmon-25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892.scope: Deactivated successfully.
Nov 28 05:07:52 localhost nova_compute[280168]: 2025-11-28 10:07:52.060 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:52 localhost kernel: device tapd42f5d61-af left promiscuous mode
Nov 28 05:07:52 localhost ovn_controller[152726]: 2025-11-28T10:07:52Z|00189|binding|INFO|Releasing lport d42f5d61-afe1-455f-b448-86993094b244 from this chassis (sb_readonly=0)
Nov 28 05:07:52 localhost ovn_controller[152726]: 2025-11-28T10:07:52Z|00190|binding|INFO|Setting lport d42f5d61-afe1-455f-b448-86993094b244 down in Southbound
Nov 28 05:07:52 localhost systemd[1]: var-lib-containers-storage-overlay-2034494d726fb92689f42123b6bec02a92e71b6d85a0adb34bb7e76ef306caf4-merged.mount: Deactivated successfully.
Nov 28 05:07:52 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-25d273de97282d954a575896c1c9bb0181ff4d364d50bff62776b0a3d7e26892-userdata-shm.mount: Deactivated successfully.
Nov 28 05:07:52 localhost nova_compute[280168]: 2025-11-28 10:07:52.087 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:52.094 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28 10.100.0.35/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-744b5a82-3c5c-4b41-ba44-527244a209c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e7a07c97c664076bc825e05137c574c', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=af9fa2ef-906d-4c5f-8a61-e350b84d90cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=d42f5d61-afe1-455f-b448-86993094b244) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 28 05:07:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:52.096 158530 INFO neutron.agent.ovn.metadata.agent [-] Port d42f5d61-afe1-455f-b448-86993094b244 in datapath 744b5a82-3c5c-4b41-ba44-527244a209c4 unbound from our chassis#033[00m
Nov 28 05:07:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:52.098 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 744b5a82-3c5c-4b41-ba44-527244a209c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 28 05:07:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:52.099 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[db4d9133-6660-43ee-916f-c951d5dce369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 28 05:07:52 localhost ovn_metadata_agent[158525]: 2025-11-28 10:07:52.380 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 28 05:07:52 localhost systemd[1]: run-netns-qdhcp\x2d744b5a82\x2d3c5c\x2d4b41\x2dba44\x2d527244a209c4.mount: Deactivated successfully.
Nov 28 05:07:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:52.411 261346 INFO neutron.agent.dhcp.agent [None req-93d85043-a83d-4963-a680-6331aca86742 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:07:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e209 e209: 6 total, 6 up, 6 in
Nov 28 05:07:52 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:52.459 2 INFO neutron.agent.securitygroups_rpc [None req-6326bf78-d29d-46c0-b3b7-72df824a50bd 646af4638a054ec4b7eb9e5438a2ab60 5e7a07c97c664076bc825e05137c574c - - default default] Security group member updated ['11213b09-f0e7-4cd0-8dcb-72dc58a4cd0b']#033[00m
Nov 28 05:07:52 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:52.491 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:07:53 localhost nova_compute[280168]: 2025-11-28 10:07:53.203 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:53 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:07:53.485 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m
Nov 28 05:07:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v414: 177 pgs: 177 active+clean; 225 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 17 KiB/s wr, 96 op/s
Nov 28 05:07:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e210 e210: 6 total, 6 up, 6 in
Nov 28 05:07:53 localhost nova_compute[280168]: 2025-11-28 10:07:53.952 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "a5860537-4c07-4bc9-9db5-8ed95b4a5b7c", "format": "json"}]: dispatch
Nov 28 05:07:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:07:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:07:54 localhost nova_compute[280168]: 2025-11-28 10:07:54.546 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e211 e211: 6 total, 6 up, 6 in
Nov 28 05:07:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v417: 177 pgs: 177 active+clean; 225 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 23 KiB/s wr, 131 op/s
Nov 28 05:07:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e212 e212: 6 total, 6 up, 6 in
Nov 28 05:07:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:07:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:07:56 localhost podman[321025]: 2025-11-28 10:07:56.975502383 +0000 UTC m=+0.081512217 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter)
Nov 28 05:07:56 localhost podman[321025]: 2025-11-28 10:07:56.991567838 +0000 UTC m=+0.097577632 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 28 05:07:57 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:07:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3e64b882-8f97-4fa5-8cd3-e92a05aac6b2", "format": "json"}]: dispatch
Nov 28 05:07:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:07:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:07:57 localhost openstack_network_exporter[240973]: ERROR 10:07:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:07:57 localhost openstack_network_exporter[240973]: ERROR 10:07:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:07:57 localhost openstack_network_exporter[240973]: ERROR 10:07:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:07:57 localhost openstack_network_exporter[240973]: ERROR 10:07:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:07:57 localhost openstack_network_exporter[240973]:
Nov 28 05:07:57 localhost openstack_network_exporter[240973]: ERROR 10:07:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:07:57 localhost openstack_network_exporter[240973]:
Nov 28 05:07:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v419: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 11 KiB/s wr, 73 op/s
Nov 28 05:07:58 localhost nova_compute[280168]: 2025-11-28 10:07:58.206 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:58 localhost podman[239012]: time="2025-11-28T10:07:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 05:07:58 localhost podman[239012]: @ - - [28/Nov/2025:10:07:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158154 "" "Go-http-client/1.1"
Nov 28 05:07:58 localhost podman[239012]: @ - - [28/Nov/2025:10:07:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19685 "" "Go-http-client/1.1"
Nov 28 05:07:59 localhost nova_compute[280168]: 2025-11-28 10:07:59.548 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:07:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 225 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 10 KiB/s wr, 66 op/s
Nov 28 05:07:59 localhost neutron_sriov_agent[254415]: 2025-11-28 10:07:59.980 2 INFO neutron.agent.securitygroups_rpc [None req-db55947c-11ec-46cf-8b4d-c6d9fdfd5571 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['6089c18b-265b-455e-adb1-d3701c826867']#033[00m
Nov 28 05:08:00 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:00.376 2 INFO neutron.agent.securitygroups_rpc [None req-c7648c00-3d9b-484d-a8fa-078b96af4727 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['6089c18b-265b-455e-adb1-d3701c826867']#033[00m
Nov 28 05:08:00 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:00.430 2 INFO neutron.agent.securitygroups_rpc [req-b7f1a51c-049a-4a1b-9cd5-f5d4f2c3744c req-dd3110e0-51f5-4337-ac22-6a0998bcc00c 87cf17c48bab44279b2e7eca9e0882a2 d3c0d1ce8d854a7b9ffc953e88cd2c44 - - default default] Security group member updated ['c52603b5-5f47-4123-b8fe-cc9f0a56d914']#033[00m
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:08:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:08:00 localhost podman[321065]: 2025-11-28 10:08:00.689774931 +0000 UTC m=+0.052672287 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 28 05:08:00 localhost systemd[1]: tmp-crun.gspECf.mount: Deactivated successfully.
Nov 28 05:08:00 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 1 addresses
Nov 28 05:08:00 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host
Nov 28 05:08:00 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts
Nov 28 05:08:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3e64b882-8f97-4fa5-8cd3-e92a05aac6b2_36a63ece-b219-4438-a9c8-be964316c461", "force": true, "format": "json"}]: dispatch
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2_36a63ece-b219-4438-a9c8-be964316c461, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp'
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta'
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2_36a63ece-b219-4438-a9c8-be964316c461, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "3e64b882-8f97-4fa5-8cd3-e92a05aac6b2", "force": true, "format": "json"}]: dispatch
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp'
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta'
Nov 28 05:08:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3e64b882-8f97-4fa5-8cd3-e92a05aac6b2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.210 2 INFO neutron.agent.securitygroups_rpc [None req-565e0ad3-78a4-4813-9446-c55fe79fb3b2 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['d7b997f9-4b8e-48df-a7bc-cf1a88435b19']#033[00m
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.339 2 INFO neutron.agent.securitygroups_rpc [None req-51f1f60e-f6ad-4b91-b7b5-2db567d0c6ed 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.402 2 INFO neutron.agent.securitygroups_rpc [None req-84e0009b-c65d-4325-9480-9689cb3fdfb2 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['d7b997f9-4b8e-48df-a7bc-cf1a88435b19']#033[00m
Nov 28 05:08:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e213 e213: 6 total, 6 up, 6 in
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.557 2 INFO neutron.agent.securitygroups_rpc [None req-f01fa5c9-de49-46e8-bc93-b89c3fed3f57 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.779 2 INFO neutron.agent.securitygroups_rpc [None req-05d81c06-d401-4205-8dff-14a86090a368 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m
Nov 28 05:08:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:08:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 146 MiB data, 899 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 29 KiB/s wr, 107 op/s
Nov 28 05:08:01 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:01.944 2 INFO neutron.agent.securitygroups_rpc [None req-22968046-714b-4dc6-9b18-f55692eae2e8 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m
Nov 28 05:08:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:02.115 2 INFO neutron.agent.securitygroups_rpc [None req-1c9f5803-20cb-4e69-804a-0617a068dfc7 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule
updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:02.258 2 INFO neutron.agent.securitygroups_rpc [None req-2bab2459-ad99-4514-94d3-affbaffcf884 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:02.788 2 INFO neutron.agent.securitygroups_rpc [None req-c50c53a2-fd90-43a9-8f31-8eb2d8fc3f23 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:02 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:02.989 2 INFO neutron.agent.securitygroups_rpc [None req-be6115b0-0729-4a7d-96a1-2edd3462b47c 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.011 2 INFO neutron.agent.securitygroups_rpc [None req-bd38caac-e12b-4351-8e6f-97a983a9711e 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.184 2 INFO neutron.agent.securitygroups_rpc [None req-edb67cac-fd5d-4927-8f94-9c49e0c903d7 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:03 localhost nova_compute[280168]: 2025-11-28 10:08:03.208 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.338 2 
INFO neutron.agent.securitygroups_rpc [None req-20312780-c455-466a-b7ab-f675d7eabde1 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.429 2 INFO neutron.agent.securitygroups_rpc [None req-14a5f88b-dc6d-4539-ba50-a6898f3db0a1 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.482 2 INFO neutron.agent.securitygroups_rpc [None req-13772359-a1cd-4764-8f15-d95cc5fef900 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 146 MiB data, 899 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 25 KiB/s wr, 93 op/s Nov 28 05:08:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:03 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1986905182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.875 2 INFO neutron.agent.securitygroups_rpc [None req-6ebeaf50-19b9-453a-b8c6-1888da8279d8 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:03 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1986905182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.894 2 INFO neutron.agent.securitygroups_rpc [None req-13ec4ccc-0950-4875-9a0e-1e0334716892 f56d2237e5b74576a33d9840c9346817 9ce143270a4649669232b53b6a44e4ba - - default default] Security group member updated ['f3e50b86-f5a6-4339-897f-e9e754c264f3']#033[00m Nov 28 05:08:03 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:03.898 2 INFO neutron.agent.securitygroups_rpc [None req-a9fad0b5-d55e-463a-8ffe-8066b04b35f2 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:04.239 2 INFO neutron.agent.securitygroups_rpc [None req-179f04c6-9ef2-42e7-8385-4b9a01b4f84f 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:04.280 2 INFO neutron.agent.securitygroups_rpc [None req-703ffdf6-c471-414a-b1d5-827cd1496408 
4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:04 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "a5860537-4c07-4bc9-9db5-8ed95b4a5b7c_b0090dae-0a99-4947-ac5e-d28013b7f627", "force": true, "format": "json"}]: dispatch Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c_b0090dae-0a99-4947-ac5e-d28013b7f627, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c_b0090dae-0a99-4947-ac5e-d28013b7f627, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:04 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "a5860537-4c07-4bc9-9db5-8ed95b4a5b7c", "force": true, "format": "json"}]: dispatch Nov 28 
05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a5860537-4c07-4bc9-9db5-8ed95b4a5b7c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:04 localhost nova_compute[280168]: 2025-11-28 10:08:04.551 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:04.659 2 INFO neutron.agent.securitygroups_rpc [None req-0ac9408a-42d9-4a9d-8ec9-d45f37e64efb 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:04 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:04.683 2 INFO neutron.agent.securitygroups_rpc [None req-1da31546-dc0a-4e96-b9fc-5fecbaaa8ced 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['15f5d617-f52a-49d8-bbd0-19868b48ee91']#033[00m Nov 28 05:08:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e214 e214: 6 total, 6 up, 6 
in Nov 28 05:08:05 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:05.187 2 INFO neutron.agent.securitygroups_rpc [None req-7ed5809e-b041-4ea1-b265-703b22d08b78 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['b758e87b-4de3-47dd-a896-167b766787f3']#033[00m Nov 28 05:08:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:08:05 Nov 28 05:08:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:08:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:08:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['backups', 'vms', 'manila_data', 'manila_metadata', 'images', '.mgr', 'volumes'] Nov 28 05:08:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:08:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 146 MiB data, 899 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 43 op/s Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.453674623115578e-06 of space, bias 1.0, pg target 0.0004899170330820771 quantized to 32 (current 32) Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:08:05 localhost 
ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:08:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.3348251675041876e-05 of space, bias 4.0, pg target 0.034505208333333336 quantized to 16 (current 16) Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:08:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:08:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e215 e215: 6 total, 6 up, 6 in Nov 28 05:08:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"56764117-bba7-4a1d-bc16-3de8a089b757", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:08:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:06 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:06.536 2 INFO neutron.agent.securitygroups_rpc [None req-c3c34998-5ba8-4c09-bfd4-af4362061351 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['77da4666-3c7e-4eb4-bd89-e0f6bc0cfb77']#033[00m Nov 28 05:08:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/56764117-bba7-4a1d-bc16-3de8a089b757/.meta.tmp' Nov 28 05:08:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/56764117-bba7-4a1d-bc16-3de8a089b757/.meta.tmp' to config b'/volumes/_nogroup/56764117-bba7-4a1d-bc16-3de8a089b757/.meta' Nov 28 05:08:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56764117-bba7-4a1d-bc16-3de8a089b757", "format": "json"}]: dispatch Nov 28 05:08:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:06 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:06 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:06.838 2 INFO neutron.agent.securitygroups_rpc [None req-cf7500dc-e72d-4341-8bc2-ebaed58e7094 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['2521adb0-8644-4922-aaf5-9462c312df8d']#033[00m Nov 28 05:08:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e216 e216: 6 total, 6 up, 6 in Nov 28 05:08:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:07 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1480453872' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:07 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1480453872' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "46e657b3-6b45-48d1-8f94-979e6670605a_4c7473fe-0e78-460c-a4b8-b3878dfb39c2", "force": true, "format": "json"}]: dispatch Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a_4c7473fe-0e78-460c-a4b8-b3878dfb39c2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a_4c7473fe-0e78-460c-a4b8-b3878dfb39c2, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "46e657b3-6b45-48d1-8f94-979e6670605a", "force": true, "format": "json"}]: dispatch Nov 28 05:08:07 localhost 
ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:46e657b3-6b45-48d1-8f94-979e6670605a, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:07 localhost ceph-mgr[286188]: [devicehealth INFO root] Check health Nov 28 05:08:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 170 active+clean; 146 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 30 KiB/s wr, 61 op/s Nov 28 05:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:08:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:08:07 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:07.949 2 INFO neutron.agent.securitygroups_rpc [None req-410f4b75-8a53-43bd-aeb3-73827c1fb9d5 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['cc6c7909-68f3-4243-ad85-ca295b324967']#033[00m Nov 28 05:08:07 localhost podman[321089]: 2025-11-28 10:08:07.976090788 +0000 UTC m=+0.081079274 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible) Nov 28 05:08:07 localhost systemd[1]: tmp-crun.cB8lIP.mount: Deactivated successfully. 
Nov 28 05:08:08 localhost podman[321102]: 2025-11-28 10:08:08.001678237 +0000 UTC m=+0.095250981 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:08:08 localhost podman[321089]: 2025-11-28 10:08:08.03350323 +0000 UTC m=+0.138491726 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true) Nov 28 05:08:08 localhost podman[321102]: 2025-11-28 10:08:08.042879659 +0000 UTC m=+0.136452443 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:08:08 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:08:08 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:08:08 localhost podman[321090]: 2025-11-28 10:08:08.093253174 +0000 UTC m=+0.189752658 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:08:08 localhost podman[321088]: 2025-11-28 10:08:08.047172492 +0000 UTC 
m=+0.151917381 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 05:08:08 localhost podman[321090]: 2025-11-28 10:08:08.125421557 +0000 UTC m=+0.221921021 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:08:08 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:08:08 localhost podman[321088]: 2025-11-28 10:08:08.180312161 +0000 UTC m=+0.285057030 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:08:08 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:08:08 localhost nova_compute[280168]: 2025-11-28 10:08:08.209 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:08.429 2 INFO neutron.agent.securitygroups_rpc [None req-9e0fb103-8bda-4ed9-896b-20548b225439 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['cc6c7909-68f3-4243-ad85-ca295b324967']#033[00m Nov 28 05:08:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:08.643 2 INFO neutron.agent.securitygroups_rpc [None req-5e0e412c-d85c-446d-af13-157bbc4d1b94 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['723b43a6-7c6c-4eb3-a519-023b34d9a2b5']#033[00m Nov 28 05:08:08 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:08.900 2 INFO neutron.agent.securitygroups_rpc [None req-aa17c84c-96b9-4f58-abde-f1675a957a10 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['723b43a6-7c6c-4eb3-a519-023b34d9a2b5']#033[00m Nov 28 05:08:09 localhost podman[321187]: 2025-11-28 10:08:09.041471666 +0000 UTC m=+0.057886519 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS) Nov 28 05:08:09 localhost dnsmasq[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/addn_hosts - 0 addresses Nov 
28 05:08:09 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/host Nov 28 05:08:09 localhost dnsmasq-dhcp[316830]: read /var/lib/neutron/dhcp/b2c4ac07-8851-40d3-9495-d0489b67c4c3/opts Nov 28 05:08:09 localhost ovn_controller[152726]: 2025-11-28T10:08:09Z|00191|binding|INFO|Releasing lport 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 from this chassis (sb_readonly=0) Nov 28 05:08:09 localhost ovn_controller[152726]: 2025-11-28T10:08:09Z|00192|binding|INFO|Setting lport 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 down in Southbound Nov 28 05:08:09 localhost nova_compute[280168]: 2025-11-28 10:08:09.232 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:09 localhost kernel: device tap0eb8bf5c-38 left promiscuous mode Nov 28 05:08:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:09.246 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005538515.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp2a75ae47-09f3-5db4-9c67-86b6e0e7c804-b2c4ac07-8851-40d3-9495-d0489b67c4c3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b2c4ac07-8851-40d3-9495-d0489b67c4c3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd3c0d1ce8d854a7b9ffc953e88cd2c44', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005538515.localdomain'}, additional_chassis=[], 
tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=940d6739-e1d9-4dcd-a724-785ba886c2af, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6) old=Port_Binding(up=[True], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:08:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:09.248 158530 INFO neutron.agent.ovn.metadata.agent [-] Port 0eb8bf5c-382d-4ca7-a3fe-a3c5250492c6 in datapath b2c4ac07-8851-40d3-9495-d0489b67c4c3 unbound from our chassis#033[00m Nov 28 05:08:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:09.251 158530 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b2c4ac07-8851-40d3-9495-d0489b67c4c3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m Nov 28 05:08:09 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:09.252 261619 DEBUG oslo.privsep.daemon [-] privsep: reply[cbadec4e-4b81-4c68-a408-8faa1724be15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m Nov 28 05:08:09 localhost nova_compute[280168]: 2025-11-28 10:08:09.261 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:09 localhost nova_compute[280168]: 2025-11-28 10:08:09.262 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:09 localhost nova_compute[280168]: 2025-11-28 10:08:09.556 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:09 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:09.713 2 INFO neutron.agent.securitygroups_rpc [None 
req-e693dfaa-acf9-4ef6-91f0-fa32111d34d5 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['89d85f46-caf0-4632-8f88-6aa2b20ffab5']#033[00m Nov 28 05:08:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56764117-bba7-4a1d-bc16-3de8a089b757", "format": "json"}]: dispatch Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:56764117-bba7-4a1d-bc16-3de8a089b757, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:56764117-bba7-4a1d-bc16-3de8a089b757, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:09 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56764117-bba7-4a1d-bc16-3de8a089b757' of type subvolume Nov 28 05:08:09 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:09.778+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '56764117-bba7-4a1d-bc16-3de8a089b757' of type subvolume Nov 28 05:08:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56764117-bba7-4a1d-bc16-3de8a089b757", "force": true, "format": "json"}]: dispatch Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/56764117-bba7-4a1d-bc16-3de8a089b757'' moved to trashcan Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:08:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:56764117-bba7-4a1d-bc16-3de8a089b757, vol_name:cephfs) < "" Nov 28 05:08:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 170 active+clean; 146 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 30 KiB/s wr, 61 op/s Nov 28 05:08:09 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:09.972 2 INFO neutron.agent.securitygroups_rpc [None req-40ac5e44-c334-4848-ad22-bdb0d1d393a9 f56d2237e5b74576a33d9840c9346817 9ce143270a4649669232b53b6a44e4ba - - default default] Security group member updated ['f3e50b86-f5a6-4339-897f-e9e754c264f3']#033[00m Nov 28 05:08:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:10.035 2 INFO neutron.agent.securitygroups_rpc [None req-aa8fcbbc-e15f-465d-a79b-54ba8e5d4dfa 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['89d85f46-caf0-4632-8f88-6aa2b20ffab5']#033[00m Nov 28 05:08:10 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:10.766 2 INFO neutron.agent.securitygroups_rpc [None req-652590f9-d0df-4db1-90c9-4125d049cb03 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['527f33ea-6583-4033-be5c-a5d3ccd20912']#033[00m Nov 28 05:08:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": 
"2c0f5770-a785-4c87-af5d-f92c8abce21d_c59c3d50-8922-4eb8-a42f-dbb4854b184b", "force": true, "format": "json"}]: dispatch Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d_c59c3d50-8922-4eb8-a42f-dbb4854b184b, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:10 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d_c59c3d50-8922-4eb8-a42f-dbb4854b184b, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "2c0f5770-a785-4c87-af5d-f92c8abce21d", "force": true, "format": "json"}]: dispatch Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" 
Nov 28 05:08:10 localhost systemd[1]: tmp-crun.7A9SCX.mount: Deactivated successfully. Nov 28 05:08:10 localhost podman[321210]: 2025-11-28 10:08:10.970263247 +0000 UTC m=+0.077726190 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' Nov 28 05:08:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' 
to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta' Nov 28 05:08:11 localhost podman[321210]: 2025-11-28 10:08:11.012449149 +0000 UTC m=+0.119912082 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:08:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:11.265 2 INFO neutron.agent.securitygroups_rpc [None req-8e5272cc-6346-4fdc-ba9b-b2622fce7146 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['527f33ea-6583-4033-be5c-a5d3ccd20912']#033[00m Nov 28 05:08:11 localhost systemd[1]: 
56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:08:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2c0f5770-a785-4c87-af5d-f92c8abce21d, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < "" Nov 28 05:08:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e217 e217: 6 total, 6 up, 6 in Nov 28 05:08:11 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:08:11 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:08:11 localhost podman[321250]: 2025-11-28 10:08:11.373876986 +0000 UTC m=+0.039752498 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:08:11 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:08:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e218 e218: 6 total, 6 up, 6 in Nov 28 05:08:11 localhost nova_compute[280168]: 2025-11-28 10:08:11.533 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:11 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:11.742 2 INFO neutron.agent.securitygroups_rpc [None 
req-fc93531f-0f28-40b2-be64-5c8ae1a05f2f 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m Nov 28 05:08:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 active+clean; 146 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 76 KiB/s wr, 126 op/s Nov 28 05:08:12 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:12.431 2 INFO neutron.agent.securitygroups_rpc [None req-950a4a21-ae98-4893-86d1-a721665d75e4 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m Nov 28 05:08:12 localhost podman[321287]: 2025-11-28 10:08:12.593426903 +0000 UTC m=+0.055072932 container kill a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:08:12 localhost dnsmasq[316830]: exiting on receipt of SIGTERM Nov 28 05:08:12 localhost systemd[1]: libpod-a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1.scope: Deactivated successfully. 
Nov 28 05:08:12 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:12.622 2 INFO neutron.agent.securitygroups_rpc [None req-70e65ce9-39cb-464e-830b-530df9be0aa7 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m Nov 28 05:08:12 localhost podman[321300]: 2025-11-28 10:08:12.674885298 +0000 UTC m=+0.063603205 container died a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:08:12 localhost systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1-userdata-shm.mount: Deactivated successfully. 
Nov 28 05:08:12 localhost podman[321300]: 2025-11-28 10:08:12.706830414 +0000 UTC m=+0.095548281 container cleanup a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:08:12 localhost systemd[1]: libpod-conmon-a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1.scope: Deactivated successfully. Nov 28 05:08:12 localhost podman[321301]: 2025-11-28 10:08:12.744409974 +0000 UTC m=+0.129221460 container remove a39214a4baa8262623303d314b8ed95b71c01a463bc2eabd06aba05950874fd1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b2c4ac07-8851-40d3-9495-d0489b67c4c3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:08:12 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:08:12.839 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:08:12 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:12.934 2 INFO neutron.agent.securitygroups_rpc [None req-75b9de2d-ffc0-4329-a7f5-226897331b9b 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m Nov 
28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.015 2 INFO neutron.agent.securitygroups_rpc [None req-d8139777-22d3-42a1-ac35-78ef0fdf0858 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m Nov 28 05:08:13 localhost nova_compute[280168]: 2025-11-28 10:08:13.211 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:13 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:08:13.228 261346 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}#033[00m Nov 28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.259 2 INFO neutron.agent.securitygroups_rpc [None req-08ef0569-f479-4e5d-8704-2ba0f20ecb11 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m Nov 28 05:08:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1770151037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1770151037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.394 2 INFO neutron.agent.securitygroups_rpc [None req-a98371b3-db2f-4475-8ff8-c8019a0287c5 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m Nov 28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.580 2 INFO neutron.agent.securitygroups_rpc [None req-86361f15-5494-4722-ab9a-5dacf7fdc04d 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m Nov 28 05:08:13 localhost systemd[1]: var-lib-containers-storage-overlay-a112523a391658be219e8ca2b94928afac8124141e68c4a75a8a0c64ca4d98f3-merged.mount: Deactivated successfully. Nov 28 05:08:13 localhost systemd[1]: run-netns-qdhcp\x2db2c4ac07\x2d8851\x2d40d3\x2d9495\x2dd0489b67c4c3.mount: Deactivated successfully. 
Nov 28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.694 2 INFO neutron.agent.securitygroups_rpc [None req-d46ea504-c7ea-4ca3-916c-fe85715f24c7 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m
Nov 28 05:08:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 active+clean; 146 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 39 KiB/s wr, 54 op/s
Nov 28 05:08:13 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:13.895 2 INFO neutron.agent.securitygroups_rpc [None req-865b6df2-96d7-4677-b689-d1af588b7477 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m
Nov 28 05:08:14 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:14.015 2 INFO neutron.agent.securitygroups_rpc [None req-447348cf-3ac4-498a-af94-5fdba49716f9 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['f4a575b2-e757-4f17-8902-ce29566c2707']#033[00m
Nov 28 05:08:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "1ad625c7-a9e5-4671-b5b8-87ed0dc833b6_a8a8759a-ecad-4b03-8e69-d4e30de4d63c", "force": true, "format": "json"}]: dispatch
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6_a8a8759a-ecad-4b03-8e69-d4e30de4d63c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp'
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta'
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6_a8a8759a-ecad-4b03-8e69-d4e30de4d63c, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "snap_name": "1ad625c7-a9e5-4671-b5b8-87ed0dc833b6", "force": true, "format": "json"}]: dispatch
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp'
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta.tmp' to config b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486/.meta'
Nov 28 05:08:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1ad625c7-a9e5-4671-b5b8-87ed0dc833b6, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:14 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:14.128 2 INFO neutron.agent.securitygroups_rpc [None req-e5d09537-743d-43a9-bf9b-ecb2832cda1b 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['609542d6-94ac-4376-bfbb-29a5ac4d9008']#033[00m
Nov 28 05:08:14 localhost nova_compute[280168]: 2025-11-28 10:08:14.560 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:08:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 05:08:14 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:14.898 2 INFO neutron.agent.securitygroups_rpc [None req-185e5877-2263-42be-9a4e-22f963823be4 4946fab2fc3e4b6a824c5b97d5395e1f 6aa84c7049ec4634978c2f2583d7c1dd - - default default] Security group rule updated ['dae70bc2-83a0-4e05-bc5e-659aa86d0528']#033[00m
Nov 28 05:08:14 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:14.920 2 INFO neutron.agent.securitygroups_rpc [None req-d7a70732-d57d-42ae-a12a-793909186bfd 98015f849ce5423d83599c846511f268 ed1e45c023a5404a97377641c4a6c580 - - default default] Security group rule updated ['d99d3754-6453-4c3f-8498-8ac20a4744c7']#033[00m
Nov 28 05:08:14 localhost podman[321328]: 2025-11-28 10:08:14.972444053 +0000 UTC m=+0.079646890 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 28 05:08:15 localhost podman[321328]: 2025-11-28 10:08:15.012631343 +0000 UTC m=+0.119834140 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Nov 28 05:08:15 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 05:08:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e219 e219: 6 total, 6 up, 6 in
Nov 28 05:08:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 170 active+clean; 146 MiB data, 923 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 44 KiB/s wr, 61 op/s
Nov 28 05:08:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp'
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta'
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "format": "json"}]: dispatch
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e220 e220: 6 total, 6 up, 6 in
Nov 28 05:08:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:08:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "format": "json"}]: dispatch
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:08:17 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486' of type subvolume
Nov 28 05:08:17 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:17.480+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486' of type subvolume
Nov 28 05:08:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486", "force": true, "format": "json"}]: dispatch
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486'' moved to trashcan
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:08:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7fc9c0d6-3c27-4cc1-b7bd-1c8604d8f486, vol_name:cephfs) < ""
Nov 28 05:08:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 146 MiB data, 924 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 68 KiB/s wr, 64 op/s
Nov 28 05:08:18 localhost nova_compute[280168]: 2025-11-28 10:08:18.240 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:08:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "snap_name": "7f4376e7-ac7c-4743-ba07-48e626bc51c1", "format": "json"}]: dispatch
Nov 28 05:08:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:19 localhost nova_compute[280168]: 2025-11-28 10:08:19.599 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:08:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:08:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 146 MiB data, 924 MiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 21 KiB/s wr, 5 op/s
Nov 28 05:08:20 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:20.221 2 INFO neutron.agent.securitygroups_rpc [None req-d053b12b-4cea-4da2-a120-7dbb5ec3bd14 b3ad92f082324bf2b498b6ec57fa1994 f4aa6a98849143efbe0d34d745657eb8 - - default default] Security group rule updated ['b905493a-8ebf-4d2f-8822-0b2d1ac4a85c']#033[00m
Nov 28 05:08:20 localhost neutron_sriov_agent[254415]: 2025-11-28 10:08:20.628 2 INFO neutron.agent.securitygroups_rpc [None req-5d5ff3a5-db5c-4c22-9208-8bc209a22601 2d65c21983fa4a008a09c7a8bb7a6484 2603cf17f09846a397a42aba4be9d81b - - default default] Security group rule updated ['90aec1a6-5e99-47c4-8e4c-11b88cdc4ca9']#033[00m
Nov 28 05:08:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e221 e221: 6 total, 6 up, 6 in
Nov 28 05:08:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:08:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v440: 177 pgs: 177 active+clean; 147 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 55 KiB/s wr, 26 op/s
Nov 28 05:08:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "snap_name": "7f4376e7-ac7c-4743-ba07-48e626bc51c1", "target_sub_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "format": "json"}]: dispatch
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, target_sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < ""
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp' to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id c532338b-5ba2-4b23-9b32-ea9746e2ddcb for path b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3'
Nov 28 05:08:23 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, target_sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < ""
Nov 28 05:08:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "format": "json"}]: dispatch
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.221+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.221+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.221+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.221+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.221+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, fddc728b-0ca3-4979-8f94-9ca64d777da3)
Nov 28 05:08:23 localhost nova_compute[280168]: 2025-11-28 10:08:23.251 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.254+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.254+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.254+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.254+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:23.254+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, fddc728b-0ca3-4979-8f94-9ca64d777da3) -- by 0 seconds
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp'
Nov 28 05:08:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp' to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta'
Nov 28 05:08:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 147 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 44 KiB/s wr, 20 op/s
Nov 28 05:08:24 localhost nova_compute[280168]: 2025-11-28 10:08:24.645 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:08:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:08:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/439989088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:08:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:08:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/439989088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.snap/7f4376e7-ac7c-4743-ba07-48e626bc51c1/b41cab7a-8985-4dbd-82ff-5c225419ddcd' to b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/132edee1-c864-490c-8dc8-4b93a2d66413'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp' to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] untracking c532338b-5ba2-4b23-9b32-ea9746e2ddcb
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta.tmp' to config b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3/.meta'
Nov 28 05:08:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, fddc728b-0ca3-4979-8f94-9ca64d777da3)
Nov 28 05:08:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 147 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 37 KiB/s wr, 17 op/s
Nov 28 05:08:26 localhost systemd[1]: tmp-crun.4lHoHl.mount: Deactivated successfully.
Nov 28 05:08:26 localhost podman[321481]: 2025-11-28 10:08:26.224986936 +0000 UTC m=+0.105884660 container exec 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, vcs-type=git, version=7, architecture=x86_64, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/agreements, build-date=2025-09-24T08:57:55, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=553, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, io.buildah.version=1.33.12, distribution-scope=public, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, name=rhceph)
Nov 28 05:08:26 localhost podman[321481]: 2025-11-28 10:08:26.340529302 +0000 UTC m=+0.221426976 container exec_died 98f7091a3e2ea0e9ed1e630f1e98c8fad1fd276cf7448473db6afc3c103ea45d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-crash-np0005538515, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/agreements, maintainer=Guillaume Abrioux , com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, RELEASE=main, org.opencontainers.image.revision=0c20ee48321f5d64135f6208d1332c0b032df6c3, release=553, io.openshift.expose-services=, build-date=2025-09-24T08:57:55, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://access.redhat.com/containers/#/registry.access.redhat.com/rhceph/images/7-553, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.33.12, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, vcs-ref=cba612d428f1498c8ae5570dd75a670ccf94c03d)
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e222 e222: 6 total, 6 up, 6 in
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0)
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0)
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0)
Nov 28 05:08:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain}] v 0)
Nov 28 05:08:27 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:08:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0)
Nov 28 05:08:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0)
Nov 28 05:08:27 localhost podman[321619]: 2025-11-28 10:08:27.227946626 +0000 UTC m=+0.127555138 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc.)
Nov 28 05:08:27 localhost podman[321619]: 2025-11-28 10:08:27.248543032 +0000 UTC m=+0.148151614 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public) Nov 28 05:08:27 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:27 localhost openstack_network_exporter[240973]: ERROR 10:08:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:08:27 localhost openstack_network_exporter[240973]: ERROR 10:08:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:08:27 localhost openstack_network_exporter[240973]: ERROR 10:08:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:08:27 localhost openstack_network_exporter[240973]: ERROR 10:08:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:08:27 localhost openstack_network_exporter[240973]: Nov 28 05:08:27 localhost openstack_network_exporter[240973]: ERROR 10:08:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:08:27 localhost openstack_network_exporter[240973]: Nov 28 05:08:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 193 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 131 op/s Nov 28 05:08:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) Nov 28 05:08:27 localhost ceph-mon[301134]: log_channel(audit) log [INF] : 
from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 05:08:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0) Nov 28 05:08:27 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 05:08:27 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 05:08:27 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 05:08:27 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 05:08:27 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:27 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": 
"osd_memory_target"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 05:08:28 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 05:08:28 localhost ceph-mgr[286188]: [cephadm INFO root] Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 05:08:28 localhost ceph-mgr[286188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) Nov 28 05:08:28 localhost ceph-mgr[286188]: [cephadm WARNING 
cephadm.serve] Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:28 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:28 localhost ceph-mgr[286188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:28 localhost ceph-mgr[286188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:08:28 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 2eb1df9b-02ce-4916-a757-2204b382ab74 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:08:28 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 2eb1df9b-02ce-4916-a757-2204b382ab74 (Updating 
node-proxy deployment (+3 -> 3)) Nov 28 05:08:28 localhost ceph-mgr[286188]: [progress INFO root] Completed event 2eb1df9b-02ce-4916-a757-2204b382ab74 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:08:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:08:28 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:08:28 localhost nova_compute[280168]: 2025-11-28 10:08:28.303 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: 
from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:08:28 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:28 localhost podman[239012]: time="2025-11-28T10:08:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:08:28 localhost podman[239012]: @ - - [28/Nov/2025:10:08:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:08:28 localhost podman[239012]: @ - - [28/Nov/2025:10:08:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1" Nov 28 05:08:29 
localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e223 e223: 6 total, 6 up, 6 in Nov 28 05:08:29 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538514.localdomain to 836.6M Nov 28 05:08:29 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538514.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:29 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538515.localdomain to 836.6M Nov 28 05:08:29 localhost ceph-mon[301134]: Adjusting osd_memory_target on np0005538513.localdomain to 836.6M Nov 28 05:08:29 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538515.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:29 localhost ceph-mon[301134]: Unable to set osd_memory_target on np0005538513.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096 Nov 28 05:08:29 localhost nova_compute[280168]: 2025-11-28 10:08:29.651 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v446: 177 pgs: 177 active+clean; 193 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 2.7 MiB/s wr, 115 op/s Nov 28 05:08:30 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:08:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:08:31 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:08:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : 
pgmap v447: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 123 KiB/s rd, 2.7 MiB/s wr, 183 op/s Nov 28 05:08:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:32 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1386400267' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:32 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1386400267' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e224 e224: 6 total, 6 up, 6 in Nov 28 05:08:33 localhost nova_compute[280168]: 2025-11-28 10:08:33.341 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e225 e225: 6 total, 6 up, 6 in Nov 28 05:08:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 18 KiB/s wr, 91 op/s Nov 28 05:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 05:08:34 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 13K writes, 52K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 4194 syncs, 3.16 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 
percent#012Interval writes: 8169 writes, 30K keys, 8169 commit groups, 1.0 writes per commit group, ingest: 23.49 MB, 0.04 MB/s#012Interval WAL: 8169 writes, 3432 syncs, 2.38 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 05:08:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:34 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3132541285' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:34 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3132541285' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e226 e226: 6 total, 6 up, 6 in Nov 28 05:08:34 localhost nova_compute[280168]: 2025-11-28 10:08:34.682 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e227 e227: 6 total, 6 up, 6 in Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:08:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:08:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v453: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail Nov 28 05:08:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e228 e228: 6 total, 6 up, 6 in Nov 28 05:08:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 8.2 KiB/s wr, 170 op/s Nov 28 05:08:38 localhost nova_compute[280168]: 2025-11-28 10:08:38.368 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 05:08:38 localhost ceph-osd[33334]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.2 total, 600.0 interval#012Cumulative writes: 17K writes, 66K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 17K writes, 5746 syncs, 3.07 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 27.02 MB, 0.05 MB/s#012Interval WAL: 11K writes, 4966 syncs, 2.37 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent Nov 28 05:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 05:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:08:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:08:38 localhost podman[321709]: 2025-11-28 10:08:38.984129846 +0000 UTC m=+0.083894871 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible) Nov 28 05:08:38 localhost podman[321709]: 2025-11-28 10:08:38.998407447 +0000 UTC m=+0.098172492 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true) Nov 28 05:08:39 localhost systemd[1]: tmp-crun.42PQNT.mount: Deactivated successfully. Nov 28 05:08:39 localhost podman[321710]: 2025-11-28 10:08:39.045095148 +0000 UTC m=+0.144836382 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible) Nov 28 05:08:39 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:08:39 localhost podman[321710]: 2025-11-28 10:08:39.086573329 +0000 UTC m=+0.186314553 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:08:39 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:08:39 localhost podman[321712]: 2025-11-28 10:08:39.138348887 +0000 UTC m=+0.231213618 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:08:39 localhost podman[321712]: 2025-11-28 10:08:39.146793467 +0000 UTC m=+0.239658168 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:08:39 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:08:39 localhost podman[321711]: 2025-11-28 10:08:39.195385518 +0000 UTC m=+0.291861340 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:08:39 localhost podman[321711]: 2025-11-28 10:08:39.226012204 +0000 UTC m=+0.322488016 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:08:39 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:08:39 localhost nova_compute[280168]: 2025-11-28 10:08:39.719 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e229 e229: 6 total, 6 up, 6 in Nov 28 05:08:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v457: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 101 KiB/s rd, 6.7 KiB/s wr, 138 op/s Nov 28 05:08:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e230 e230: 6 total, 6 up, 6 in Nov 28 05:08:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e231 e231: 6 total, 6 up, 6 in Nov 28 05:08:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 5.3 KiB/s wr, 160 op/s Nov 28 05:08:41 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:08:41 localhost podman[321795]: 2025-11-28 10:08:41.965775639 +0000 UTC m=+0.073477000 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:08:41 localhost podman[321795]: 2025-11-28 10:08:41.977353136 +0000 UTC m=+0.085054487 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:08:41 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
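The `health_status` / `exec_died` pairs above all share one shape: a timestamp, a 64-hex-character container id, and a parenthesized list of `key=value` attributes (with a nested `config_data={...}` dict). A minimal sketch of pulling the container name and health status out of such a line; the regex and the field whitelist are assumptions based only on the lines shown here, not on any documented podman log format:

```python
import re

# Matches podman container-event fragments like the ones in this log, e.g.
# "... container health_status <64-hex-id> (image=..., name=node_exporter, health_status=healthy, ...)"
EVENT_RE = re.compile(
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64}) "
    r"\((?P<attrs>.*)\)"
)

def parse_podman_event(line: str) -> dict:
    """Extract the event type, container id, and a few simple attributes."""
    m = EVENT_RE.search(line)
    if not m:
        return {}
    out = {"event": m.group("event"), "cid": m.group("cid")}
    # Attributes are comma-separated key=value pairs; config_data embeds
    # commas inside {...}, so only simple brace-free values are scanned.
    for kv in re.finditer(r"(\w+)=([^,{}]+)(?:, |$)", m.group("attrs")):
        key, val = kv.group(1), kv.group(2).strip()
        if key in ("name", "health_status", "config_id"):
            out[key] = val
    return out

# Shortened version of the node_exporter health_status line above.
line = ("Nov 28 05:08:41 localhost podman[321795]: container health_status "
        "56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 "
        "(image=quay.io/prometheus/node-exporter@sha256:abc, name=node_exporter, "
        "health_status=healthy, config_id=edpm)")
print(parse_podman_event(line))
```

On the sample line this yields the event type `health_status`, the container id, and `name=node_exporter`, `health_status=healthy`, `config_id=edpm`; the nested `config_data` dict is deliberately skipped rather than parsed.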
Nov 28 05:08:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cdb1ad02-8773-41eb-84cf-933676ec61c7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cdb1ad02-8773-41eb-84cf-933676ec61c7/.meta.tmp' Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cdb1ad02-8773-41eb-84cf-933676ec61c7/.meta.tmp' to config b'/volumes/_nogroup/cdb1ad02-8773-41eb-84cf-933676ec61c7/.meta' Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cdb1ad02-8773-41eb-84cf-933676ec61c7", "format": "json"}]: dispatch Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "53e82a45-a05e-4381-85a3-03f5d3eecad9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/53e82a45-a05e-4381-85a3-03f5d3eecad9/.meta.tmp' Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/53e82a45-a05e-4381-85a3-03f5d3eecad9/.meta.tmp' to config b'/volumes/_nogroup/53e82a45-a05e-4381-85a3-03f5d3eecad9/.meta' Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "53e82a45-a05e-4381-85a3-03f5d3eecad9", "format": "json"}]: dispatch Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e232 e232: 6 total, 6 up, 6 in Nov 28 05:08:43 localhost nova_compute[280168]: 2025-11-28 10:08:43.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:43 localhost ovn_controller[152726]: 2025-11-28T10:08:43Z|00193|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Nov 28 05:08:43 localhost nova_compute[280168]: 2025-11-28 10:08:43.421 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 119 KiB/s rd, 5.3 KiB/s wr, 161 op/s Nov 28 05:08:44 localhost nova_compute[280168]: 2025-11-28 10:08:44.722 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e233 e233: 6 total, 6 up, 6 in Nov 28 05:08:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, 
sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta' Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "format": "json"}]: dispatch Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 147 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s rd, 4.3 KiB/s wr, 130 op/s Nov 28 05:08:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e234 e234: 6 total, 6 up, 6 in Nov 28 05:08:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:08:45 localhost podman[321818]: 2025-11-28 10:08:45.983552056 +0000 UTC m=+0.084221652 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:08:45 localhost podman[321818]: 2025-11-28 10:08:45.998499067 +0000 UTC m=+0.099168713 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd) Nov 28 05:08:46 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
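The ceph-mgr audit entries show `client.openstack` driving the CephFS volumes module with JSON mon commands (`fs subvolume create` → `fs subvolume getpath`, and for one subvolume `fs clone status` → `fs subvolume rm`). A sketch of assembling the `fs subvolume create` payload exactly as it appears in the dispatched commands above; the helper function itself is hypothetical, but every key name and the default values are copied verbatim from the log:

```python
import json

def fs_subvolume_create_cmd(vol_name: str, sub_name: str,
                            size: int, mode: str = "0755") -> str:
    """Build the JSON command body seen in the ceph-mgr audit lines above.

    Keys (prefix, vol_name, sub_name, size, namespace_isolated, mode,
    format) mirror the dispatched commands in this log.
    """
    return json.dumps({
        "prefix": "fs subvolume create",
        "vol_name": vol_name,
        "sub_name": sub_name,
        "size": size,
        "namespace_isolated": True,
        "mode": mode,
        "format": "json",
    })

# The first subvolume created in this log: 1 GiB, namespace-isolated.
cmd = fs_subvolume_create_cmd("cephfs",
                              "cdb1ad02-8773-41eb-84cf-933676ec61c7",
                              1073741824)
print(cmd)
```

Note the size of 1073741824 bytes (1 GiB) matches the share size requested for each subvolume in the log, and the later `Operation not supported` reply shows why the caller falls back from `fs clone status` to `fs subvolume rm` when the target turns out to be a plain subvolume rather than a clone.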
Nov 28 05:08:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "53e82a45-a05e-4381-85a3-03f5d3eecad9", "format": "json"}]: dispatch Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:46 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53e82a45-a05e-4381-85a3-03f5d3eecad9' of type subvolume Nov 28 05:08:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:46.243+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '53e82a45-a05e-4381-85a3-03f5d3eecad9' of type subvolume Nov 28 05:08:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "53e82a45-a05e-4381-85a3-03f5d3eecad9", "force": true, "format": "json"}]: dispatch Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/53e82a45-a05e-4381-85a3-03f5d3eecad9'' moved to trashcan Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Nov 28 05:08:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:53e82a45-a05e-4381-85a3-03f5d3eecad9, vol_name:cephfs) < "" Nov 28 05:08:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e235 e235: 6 total, 6 up, 6 in Nov 28 05:08:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:47 localhost nova_compute[280168]: 2025-11-28 10:08:47.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:47 localhost nova_compute[280168]: 2025-11-28 10:08:47.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:47 localhost nova_compute[280168]: 2025-11-28 10:08:47.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:08:47 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e236 e236: 6 total, 6 up, 6 in Nov 28 05:08:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 147 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 62 KiB/s wr, 112 op/s Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.241 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.241 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.262 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.262 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:48.411 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:08:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:48.412 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:08:48 localhost nova_compute[280168]: 2025-11-28 10:08:48.446 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "snap_name": "b751b79e-578e-4a27-88ca-ab5a9cb2b343", "format": "json"}]: dispatch Nov 28 05:08:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.255 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.256 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.256 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 
2025-11-28 10:08:49.256 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.257 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:08:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e237 e237: 6 total, 6 up, 6 in Nov 28 05:08:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:08:49 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1833255642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.696 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.762 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 147 MiB data, 933 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 62 KiB/s wr, 112 op/s Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.955 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.956 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11499MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.957 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:08:49 localhost nova_compute[280168]: 2025-11-28 10:08:49.957 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.033 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.034 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.059 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:08:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:50.414 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:08:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:08:50 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/936247826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.573 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.578 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.596 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 
based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.597 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:08:50 localhost nova_compute[280168]: 2025-11-28 10:08:50.598 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:08:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:50.852 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:08:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:50.853 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:08:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:08:50.854 158530 DEBUG 
oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:08:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e238 e238: 6 total, 6 up, 6 in Nov 28 05:08:51 localhost nova_compute[280168]: 2025-11-28 10:08:51.600 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:51 localhost nova_compute[280168]: 2025-11-28 10:08:51.600 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 147 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 70 KiB/s wr, 164 op/s Nov 28 05:08:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:08:51 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2163301190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:08:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:08:51 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2163301190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:08:52 localhost nova_compute[280168]: 2025-11-28 10:08:52.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:08:53 localhost nova_compute[280168]: 2025-11-28 10:08:53.449 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v473: 177 pgs: 177 active+clean; 147 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 21 KiB/s wr, 68 op/s Nov 28 05:08:54 localhost nova_compute[280168]: 2025-11-28 10:08:54.806 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "snap_name": "b751b79e-578e-4a27-88ca-ab5a9cb2b343_a9f5af9b-568e-446f-a65a-029b17668afd", "force": true, "format": "json"}]: dispatch Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343_a9f5af9b-568e-446f-a65a-029b17668afd, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' Nov 28 
05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta' Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343_a9f5af9b-568e-446f-a65a-029b17668afd, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "snap_name": "b751b79e-578e-4a27-88ca-ab5a9cb2b343", "force": true, "format": "json"}]: dispatch Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta.tmp' to config b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb/.meta' Nov 28 05:08:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:b751b79e-578e-4a27-88ca-ab5a9cb2b343, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:55 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 147 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 16 KiB/s wr, 54 op/s Nov 28 05:08:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e239 e239: 6 total, 6 up, 6 in Nov 28 05:08:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:08:57 localhost openstack_network_exporter[240973]: ERROR 10:08:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:08:57 localhost openstack_network_exporter[240973]: ERROR 10:08:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:08:57 localhost openstack_network_exporter[240973]: ERROR 10:08:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:08:57 localhost openstack_network_exporter[240973]: ERROR 10:08:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:08:57 localhost openstack_network_exporter[240973]: Nov 28 05:08:57 localhost openstack_network_exporter[240973]: ERROR 10:08:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:08:57 localhost openstack_network_exporter[240973]: Nov 28 05:08:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v476: 177 pgs: 177 active+clean; 147 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 31 KiB/s wr, 83 op/s Nov 28 05:08:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:08:57 localhost systemd[1]: tmp-crun.TgvAe0.mount: Deactivated successfully. 
Nov 28 05:08:57 localhost podman[321881]: 2025-11-28 10:08:57.971525861 +0000 UTC m=+0.083085686 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 05:08:58 localhost podman[321881]: 2025-11-28 10:08:58.013521068 +0000 UTC m=+0.125080893 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 05:08:58 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:08:58 localhost nova_compute[280168]: 2025-11-28 10:08:58.452 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "format": "json"}]: dispatch Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:08:58 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:08:58.669+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b827a5f-531c-4609-b695-d7ea3eea20eb' of type subvolume Nov 28 05:08:58 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b827a5f-531c-4609-b695-d7ea3eea20eb' of type subvolume Nov 28 05:08:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b827a5f-531c-4609-b695-d7ea3eea20eb", "force": true, "format": "json"}]: dispatch Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4b827a5f-531c-4609-b695-d7ea3eea20eb'' moved to trashcan Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:08:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b827a5f-531c-4609-b695-d7ea3eea20eb, vol_name:cephfs) < "" Nov 28 05:08:58 localhost podman[239012]: time="2025-11-28T10:08:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:08:58 localhost podman[239012]: @ - - [28/Nov/2025:10:08:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:08:58 localhost podman[239012]: @ - - [28/Nov/2025:10:08:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19217 "" "Go-http-client/1.1" Nov 28 05:08:59 localhost nova_compute[280168]: 2025-11-28 10:08:59.808 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:08:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 147 MiB data, 934 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 29 KiB/s wr, 80 op/s Nov 28 05:09:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e240 e240: 6 total, 6 up, 6 in Nov 28 05:09:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:01 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v479: 177 pgs: 177 active+clean; 147 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 29 KiB/s wr, 65 op/s Nov 28 05:09:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cdb1ad02-8773-41eb-84cf-933676ec61c7", "format": "json"}]: dispatch Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:01 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:01.940+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cdb1ad02-8773-41eb-84cf-933676ec61c7' of type subvolume Nov 28 05:09:01 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cdb1ad02-8773-41eb-84cf-933676ec61c7' of type subvolume Nov 28 05:09:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cdb1ad02-8773-41eb-84cf-933676ec61c7", "force": true, "format": "json"}]: dispatch Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/cdb1ad02-8773-41eb-84cf-933676ec61c7'' moved to trashcan Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cdb1ad02-8773-41eb-84cf-933676ec61c7, vol_name:cephfs) < "" Nov 28 05:09:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) Nov 28 05:09:02 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4047621474' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch Nov 28 05:09:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:09:02 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2136588701' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:09:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:09:02 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/2136588701' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:09:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' Nov 28 05:09:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta' Nov 28 05:09:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "format": "json"}]: dispatch Nov 28 05:09:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:02 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e241 e241: 6 total, 6 up, 6 in Nov 28 05:09:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "format": "json"}]: dispatch Nov 28 05:09:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:03 localhost nova_compute[280168]: 2025-11-28 10:09:03.497 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e242 e242: 6 total, 6 up, 6 in Nov 28 05:09:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 147 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 19 KiB/s wr, 47 op/s Nov 28 05:09:04 localhost nova_compute[280168]: 2025-11-28 10:09:04.848 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "format": "json"}]: dispatch Nov 28 05:09:05 localhost 
ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < "" Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < "" Nov 28 05:09:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:09:05 Nov 28 05:09:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:09:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:09:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['volumes', 'manila_metadata', 'images', 'vms', 'manila_data', 'backups', '.mgr'] Nov 28 05:09:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:09:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e243 e243: 6 total, 6 up, 6 in Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:09:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "7121dfd5-9b25-4dc7-904e-3be06d03c952", "format": "json"}]: dispatch Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v484: 177 pgs: 177 active+clean; 147 MiB data, 935 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 22 KiB/s wr, 54 op/s Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 
0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 3.453319839940443e-06 of space, bias 1.0, pg target 0.0006895128613747751 quantized to 32 (current 32) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32) Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:09:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00014422154173646007 of space, bias 4.0, pg target 0.11480034722222221 quantized to 16 (current 16) Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support 
INFO root] load_schedules: volumes, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:09:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:09:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e244 e244: 6 total, 6 up, 6 in Nov 28 05:09:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "format": "json"}]: dispatch Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fddc728b-0ca3-4979-8f94-9ca64d777da3", "force": true, "format": "json"}]: dispatch Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < "" Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fddc728b-0ca3-4979-8f94-9ca64d777da3'' moved to trashcan Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fddc728b-0ca3-4979-8f94-9ca64d777da3, vol_name:cephfs) < "" Nov 28 05:09:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e245 e245: 6 total, 6 up, 6 in Nov 28 05:09:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v487: 177 pgs: 177 active+clean; 148 MiB data, 944 MiB used, 41 GiB / 42 GiB avail; 142 KiB/s rd, 65 KiB/s wr, 200 op/s Nov 28 05:09:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:09:08 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3077193037' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:09:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:09:08 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/3077193037' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:09:08 localhost nova_compute[280168]: 2025-11-28 10:09:08.525 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "24807f53-77c4-4f15-8496-e0fd980f52a8", "format": "json"}]: dispatch Nov 28 05:09:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 148 MiB data, 944 MiB used, 41 GiB / 42 GiB avail; 101 KiB/s rd, 46 KiB/s wr, 142 op/s Nov 28 05:09:09 localhost nova_compute[280168]: 2025-11-28 10:09:09.891 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. 
Nov 28 05:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:09:09 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:09:10 localhost podman[321902]: 2025-11-28 10:09:10.014891348 +0000 UTC m=+0.093073824 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125) Nov 28 05:09:10 localhost podman[321903]: 2025-11-28 10:09:10.072905629 +0000 UTC m=+0.144529813 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3) Nov 28 05:09:10 localhost podman[321903]: 2025-11-28 10:09:10.081321489 +0000 UTC m=+0.152945703 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c 
(image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:09:10 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:09:10 localhost podman[321902]: 2025-11-28 10:09:10.102036508 +0000 UTC m=+0.180218984 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:09:10 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:09:10 localhost podman[321901]: 2025-11-28 10:09:10.118489136 +0000 UTC m=+0.200317664 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:09:10 localhost podman[321901]: 2025-11-28 10:09:10.132776177 +0000 UTC m=+0.214604705 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:09:10 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:09:10 localhost podman[321909]: 2025-11-28 10:09:10.227666296 +0000 UTC m=+0.294131541 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:09:10 localhost podman[321909]: 2025-11-28 10:09:10.261874452 +0000 UTC m=+0.328339747 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:09:10 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:09:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "snap_name": "7f4376e7-ac7c-4743-ba07-48e626bc51c1_2561ce9f-a8b9-43a9-9ce8-c591d939acc0", "force": true, "format": "json"}]: dispatch Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1_2561ce9f-a8b9-43a9-9ce8-c591d939acc0, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < "" Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta' Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1_2561ce9f-a8b9-43a9-9ce8-c591d939acc0, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < "" Nov 28 05:09:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"f08fd6d7-e632-4600-b126-7d3287d90baf", "snap_name": "7f4376e7-ac7c-4743-ba07-48e626bc51c1", "force": true, "format": "json"}]: dispatch Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < "" Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta.tmp' to config b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf/.meta' Nov 28 05:09:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7f4376e7-ac7c-4743-ba07-48e626bc51c1, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < "" Nov 28 05:09:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e246 e246: 6 total, 6 up, 6 in Nov 28 05:09:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 85 KiB/s wr, 199 op/s Nov 28 05:09:12 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:09:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "24807f53-77c4-4f15-8496-e0fd980f52a8_43874ef7-c24e-4d18-b524-79d36cc3d273", "force": true, "format": "json"}]: dispatch Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8_43874ef7-c24e-4d18-b524-79d36cc3d273, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta' Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8_43874ef7-c24e-4d18-b524-79d36cc3d273, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "24807f53-77c4-4f15-8496-e0fd980f52a8", "force": true, "format": "json"}]: dispatch Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' Nov 28 05:09:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta' Nov 28 05:09:12 localhost systemd[1]: tmp-crun.UYMlDo.mount: Deactivated successfully. Nov 28 05:09:12 localhost podman[321986]: 2025-11-28 10:09:12.997426948 +0000 UTC m=+0.104766636 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': 
'/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:24807f53-77c4-4f15-8496-e0fd980f52a8, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < "" Nov 28 05:09:13 localhost podman[321986]: 2025-11-28 10:09:13.010473381 +0000 UTC m=+0.117813119 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus 
Authors )
Nov 28 05:09:13 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 05:09:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:09:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/803796492' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:09:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:09:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/803796492' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:09:13 localhost nova_compute[280168]: 2025-11-28 10:09:13.566 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 69 KiB/s wr, 162 op/s
Nov 28 05:09:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "format": "json"}]: dispatch
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f08fd6d7-e632-4600-b126-7d3287d90baf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f08fd6d7-e632-4600-b126-7d3287d90baf, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:09:13 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f08fd6d7-e632-4600-b126-7d3287d90baf' of type subvolume
Nov 28 05:09:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:13.908+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f08fd6d7-e632-4600-b126-7d3287d90baf' of type subvolume
Nov 28 05:09:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f08fd6d7-e632-4600-b126-7d3287d90baf", "force": true, "format": "json"}]: dispatch
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f08fd6d7-e632-4600-b126-7d3287d90baf'' moved to trashcan
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:09:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f08fd6d7-e632-4600-b126-7d3287d90baf, vol_name:cephfs) < ""
Nov 28 05:09:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e247 e247: 6 total, 6 up, 6 in
Nov 28 05:09:14 localhost nova_compute[280168]: 2025-11-28 10:09:14.894 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e248 e248: 6 total, 6 up, 6 in
Nov 28 05:09:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 38 KiB/s wr, 57 op/s
Nov 28 05:09:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "7121dfd5-9b25-4dc7-904e-3be06d03c952_0622dbc9-fa1f-43ca-acfe-551c9624e3ef", "force": true, "format": "json"}]: dispatch
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952_0622dbc9-fa1f-43ca-acfe-551c9624e3ef, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp'
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta'
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952_0622dbc9-fa1f-43ca-acfe-551c9624e3ef, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "snap_name": "7121dfd5-9b25-4dc7-904e-3be06d03c952", "force": true, "format": "json"}]: dispatch
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp'
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta.tmp' to config b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34/.meta'
Nov 28 05:09:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7121dfd5-9b25-4dc7-904e-3be06d03c952, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e249 e249: 6 total, 6 up, 6 in
Nov 28 05:09:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:09:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 05:09:16 localhost podman[322009]: 2025-11-28 10:09:16.99720563 +0000 UTC m=+0.105468357 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:09:17 localhost podman[322009]: 2025-11-28 10:09:17.034584214 +0000 UTC m=+0.142846951 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 28 05:09:17 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 05:09:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:09:17 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3754312141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:09:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:09:17 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3754312141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:09:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:09:17.454 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:09:17Z, description=, device_id=ca028b75-13e6-4080-861d-94b31335eb39, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=e72cdf7d-f7d9-4850-a363-66db0614dee5, ip_allocation=immediate, mac_address=fa:16:3e:6a:c4:de, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3153, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:09:17Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m
Nov 28 05:09:17 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:09:17 localhost podman[322045]: 2025-11-28 10:09:17.719870439 +0000 UTC m=+0.059220240 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 05:09:17 localhost systemd[1]: tmp-crun.7EPmxP.mount: Deactivated successfully.
Nov 28 05:09:17 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:09:17 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:09:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 117 KiB/s rd, 3.6 MiB/s wr, 171 op/s
Nov 28 05:09:17 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:09:17.917 261346 INFO neutron.agent.dhcp.agent [None req-ea411aaf-ab83-42be-ba66-34c6532be3a6 - - - - - -] DHCP configuration for ports {'e72cdf7d-f7d9-4850-a363-66db0614dee5'} is completed#033[00m
Nov 28 05:09:18 localhost nova_compute[280168]: 2025-11-28 10:09:18.569 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:18 localhost nova_compute[280168]: 2025-11-28 10:09:18.694 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "format": "json"}]: dispatch
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:775dc90b-162d-43b3-b906-8b2d7da70c34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:775dc90b-162d-43b3-b906-8b2d7da70c34, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:09:19 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:19.481+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '775dc90b-162d-43b3-b906-8b2d7da70c34' of type subvolume
Nov 28 05:09:19 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '775dc90b-162d-43b3-b906-8b2d7da70c34' of type subvolume
Nov 28 05:09:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "775dc90b-162d-43b3-b906-8b2d7da70c34", "force": true, "format": "json"}]: dispatch
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/775dc90b-162d-43b3-b906-8b2d7da70c34'' moved to trashcan
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:09:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:775dc90b-162d-43b3-b906-8b2d7da70c34, vol_name:cephfs) < ""
Nov 28 05:09:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e250 e250: 6 total, 6 up, 6 in
Nov 28 05:09:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 194 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 133 KiB/s rd, 4.1 MiB/s wr, 195 op/s
Nov 28 05:09:19 localhost nova_compute[280168]: 2025-11-28 10:09:19.908 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e251 e251: 6 total, 6 up, 6 in
Nov 28 05:09:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e252 e252: 6 total, 6 up, 6 in
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.760736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561760847, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2894, "num_deletes": 281, "total_data_size": 4761968, "memory_usage": 4840160, "flush_reason": "Manual Compaction"}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561783515, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 3108429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23971, "largest_seqno": 26860, "table_properties": {"data_size": 3096625, "index_size": 7669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27914, "raw_average_key_size": 22, "raw_value_size": 3072092, "raw_average_value_size": 2520, "num_data_blocks": 320, "num_entries": 1219, "num_filter_entries": 1219, "num_deletions": 281, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324437, "oldest_key_time": 1764324437, "file_creation_time": 1764324561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 22835 microseconds, and 7691 cpu microseconds.
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.783578) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 3108429 bytes OK
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.783609) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.786217) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.786251) EVENT_LOG_v1 {"time_micros": 1764324561786242, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.786279) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4748509, prev total WAL file size 4748509, number of live WAL files 2.
Nov 28 05:09:21 localhost nova_compute[280168]: 2025-11-28 10:09:21.787 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.789635) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(3035KB)], [36(17MB)]
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561789706, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 20953714, "oldest_snapshot_seqno": -1}
Nov 28 05:09:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:09:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 234 KiB/s rd, 4.1 MiB/s wr, 332 op/s
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 13409 keys, 19616539 bytes, temperature: kUnknown
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561892509, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 19616539, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19535806, "index_size": 46136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 357758, "raw_average_key_size": 26, "raw_value_size": 19303583, "raw_average_value_size": 1439, "num_data_blocks": 1754, "num_entries": 13409, "num_filter_entries": 13409, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324561, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.892843) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 19616539 bytes
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.911609) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 190.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 17.0 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(13.1) write-amplify(6.3) OK, records in: 13975, records dropped: 566 output_compression: NoCompression
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.911641) EVENT_LOG_v1 {"time_micros": 1764324561911627, "job": 20, "event": "compaction_finished", "compaction_time_micros": 102924, "compaction_time_cpu_micros": 39439, "output_level": 6, "num_output_files": 1, "total_output_size": 19616539, "num_input_records": 13975, "num_output_records": 13409, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561912260, "job": 20, "event": "table_file_deletion", "file_number": 38}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324561914576, "job": 20, "event": "table_file_deletion", "file_number": 36}
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.789514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.914673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.914679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.914682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.914685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:21 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:21.914687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:09:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e253 e253: 6 total, 6 up, 6 in
Nov 28 05:09:23 localhost nova_compute[280168]: 2025-11-28 10:09:23.572 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e254 e254: 6 total, 6 up, 6 in
Nov 28 05:09:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 139 KiB/s rd, 43 KiB/s wr, 189 op/s
Nov 28 05:09:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:09:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/314747336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:09:24 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:09:24 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/314747336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:09:24 localhost nova_compute[280168]: 2025-11-28 10:09:24.959 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 28 05:09:25 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3883756345' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Nov 28 05:09:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v505: 177 pgs: 177 active+clean; 195 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 34 KiB/s wr, 147 op/s
Nov 28 05:09:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e255 e255: 6 total, 6 up, 6 in
Nov 28 05:09:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:09:27 localhost openstack_network_exporter[240973]: ERROR 10:09:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:09:27 localhost openstack_network_exporter[240973]: ERROR 10:09:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:09:27 localhost openstack_network_exporter[240973]: ERROR 10:09:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:09:27 localhost openstack_network_exporter[240973]: ERROR 10:09:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:09:27 localhost openstack_network_exporter[240973]:
Nov 28 05:09:27 localhost openstack_network_exporter[240973]: ERROR 10:09:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:09:27 localhost openstack_network_exporter[240973]:
Nov 28 05:09:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 251 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 9.4 MiB/s wr, 140 op/s
Nov 28 05:09:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:09:28 localhost nova_compute[280168]: 2025-11-28 10:09:28.574 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:09:28 localhost podman[322083]: 2025-11-28 10:09:28.614234154 +0000 UTC m=+0.103454715 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 28 05:09:28 localhost podman[322083]: 2025-11-28 10:09:28.650204033 +0000 UTC m=+0.139424544 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products.
This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 28 05:09:28 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:09:28 localhost podman[239012]: time="2025-11-28T10:09:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:09:28 localhost podman[239012]: @ - - [28/Nov/2025:10:09:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:09:28 localhost podman[239012]: @ - - [28/Nov/2025:10:09:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19211 "" "Go-http-client/1.1" Nov 28 05:09:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:09:29 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:09:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:09:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:09:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:09:29 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 4fe8d159-29c7-4ceb-bfb8-9a34d8ee2669 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:09:29 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 4fe8d159-29c7-4ceb-bfb8-9a34d8ee2669 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:09:29 localhost ceph-mgr[286188]: [progress INFO root] Completed event 4fe8d159-29c7-4ceb-bfb8-9a34d8ee2669 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:09:29 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:09:29 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:09:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:09:29 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:09:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v508: 177 pgs: 177 active+clean; 251 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 7.9 MiB/s wr, 119 op/s Nov 28 05:09:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:29 localhost nova_compute[280168]: 2025-11-28 10:09:29.964 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' Nov 28 05:09:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta' Nov 28 05:09:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "format": "json"}]: dispatch Nov 28 05:09:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:30 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:09:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:09:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e256 e256: 6 total, 6 up, 6 in Nov 28 05:09:31 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:09:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v510: 177 pgs: 177 active+clean; 687 MiB data, 2.3 GiB used, 40 GiB / 42 GiB avail; 164 KiB/s rd, 62 MiB/s wr, 258 op/s Nov 28 05:09:33 localhost 
ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "snap_name": "7ef5a977-4548-46f4-a4ab-f17fcf8c22a8", "format": "json"}]: dispatch Nov 28 05:09:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:33 localhost nova_compute[280168]: 2025-11-28 10:09:33.577 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v511: 177 pgs: 177 active+clean; 687 MiB data, 2.3 GiB used, 40 GiB / 42 GiB avail; 164 KiB/s rd, 62 MiB/s wr, 258 op/s Nov 28 05:09:34 localhost nova_compute[280168]: 2025-11-28 10:09:34.993 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d1cf0525-f942-48d6-9b6c-f05643be68cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs 
subvolume create, size:1073741824, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', ), ('cephfs', )] Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d1cf0525-f942-48d6-9b6c-f05643be68cd/.meta.tmp' Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d1cf0525-f942-48d6-9b6c-f05643be68cd/.meta.tmp' to config b'/volumes/_nogroup/d1cf0525-f942-48d6-9b6c-f05643be68cd/.meta' Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d1cf0525-f942-48d6-9b6c-f05643be68cd", "format": "json"}]: dispatch Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] 
Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e257 e257: 6 total, 6 up, 6 in Nov 28 05:09:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v513: 177 pgs: 177 active+clean; 687 MiB data, 2.3 GiB used, 40 GiB / 42 GiB avail; 90 KiB/s rd, 55 MiB/s wr, 153 op/s Nov 28 05:09:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "snap_name": "7ef5a977-4548-46f4-a4ab-f17fcf8c22a8_bfb7e13e-e3ef-4568-80b6-62eac53f1d23", "force": true, "format": "json"}]: dispatch Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8_bfb7e13e-e3ef-4568-80b6-62eac53f1d23, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta' Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot 
rm, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8_bfb7e13e-e3ef-4568-80b6-62eac53f1d23, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "snap_name": "7ef5a977-4548-46f4-a4ab-f17fcf8c22a8", "force": true, "format": "json"}]: dispatch Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta.tmp' to config b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb/.meta' Nov 28 05:09:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7ef5a977-4548-46f4-a4ab-f17fcf8c22a8, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 177 active+clean; 1.1 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 178 KiB/s rd, 105 MiB/s wr, 304 op/s Nov 28 05:09:38 localhost nova_compute[280168]: 2025-11-28 10:09:38.579 280172 
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:38 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:38 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:38 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7/.meta.tmp' Nov 28 05:09:38 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7/.meta.tmp' to config b'/volumes/_nogroup/f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7/.meta' Nov 28 05:09:38 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:38 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7", "format": "json"}]: dispatch Nov 28 05:09:38 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:38 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "format": "json"}]: dispatch Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:06689989-6341-4053-b4cd-67b6bbd3acbb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:06689989-6341-4053-b4cd-67b6bbd3acbb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:39 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:39.659+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06689989-6341-4053-b4cd-67b6bbd3acbb' of type subvolume Nov 28 05:09:39 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '06689989-6341-4053-b4cd-67b6bbd3acbb' of type subvolume Nov 28 05:09:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "06689989-6341-4053-b4cd-67b6bbd3acbb", "force": true, "format": "json"}]: dispatch Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/06689989-6341-4053-b4cd-67b6bbd3acbb'' moved to trashcan Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:06689989-6341-4053-b4cd-67b6bbd3acbb, vol_name:cephfs) < "" Nov 28 05:09:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v515: 177 pgs: 177 active+clean; 1.1 GiB data, 3.5 GiB used, 38 GiB / 42 GiB avail; 171 KiB/s rd, 101 MiB/s wr, 292 op/s Nov 28 05:09:40 localhost nova_compute[280168]: 2025-11-28 10:09:40.048 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:09:40 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:09:40 localhost systemd[1]: tmp-crun.oURLOR.mount: Deactivated successfully. 
Nov 28 05:09:41 localhost podman[322172]: 2025-11-28 10:09:41.001897007 +0000 UTC m=+0.104957841 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true) Nov 28 05:09:41 localhost podman[322171]: 2025-11-28 10:09:41.044497933 +0000 UTC m=+0.151595581 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 
'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:09:41 localhost podman[322172]: 2025-11-28 10:09:41.06581257 +0000 UTC m=+0.168873434 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:09:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e258 e258: 6 total, 6 up, 6 in Nov 28 05:09:41 localhost podman[322173]: 2025-11-28 10:09:41.110524701 +0000 UTC m=+0.207939260 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 
'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent) Nov 28 05:09:41 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:09:41 localhost podman[322171]: 2025-11-28 10:09:41.135018777 +0000 UTC m=+0.242116475 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': 
'/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible) Nov 28 05:09:41 localhost podman[322173]: 2025-11-28 10:09:41.145546932 +0000 UTC m=+0.242961491 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:09:41 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:09:41 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:09:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:09:41 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1184348018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:09:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:09:41 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1184348018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:09:41 localhost podman[322178]: 2025-11-28 10:09:41.209576429 +0000 UTC m=+0.296869505 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:09:41 localhost podman[322178]: 2025-11-28 10:09:41.240665868 +0000 UTC m=+0.327958944 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 
'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:09:41 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:09:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v517: 177 pgs: 177 active+clean; 195 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 200 KiB/s rd, 67 MiB/s wr, 343 op/s Nov 28 05:09:41 localhost systemd[1]: tmp-crun.oxXDaq.mount: Deactivated successfully. Nov 28 05:09:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d1cf0525-f942-48d6-9b6c-f05643be68cd", "format": "json"}]: dispatch Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:42 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:42.262+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd1cf0525-f942-48d6-9b6c-f05643be68cd' of type subvolume Nov 28 05:09:42 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not 
allowed on subvolume 'd1cf0525-f942-48d6-9b6c-f05643be68cd' of type subvolume Nov 28 05:09:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d1cf0525-f942-48d6-9b6c-f05643be68cd", "force": true, "format": "json"}]: dispatch Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d1cf0525-f942-48d6-9b6c-f05643be68cd'' moved to trashcan Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d1cf0525-f942-48d6-9b6c-f05643be68cd, vol_name:cephfs) < "" Nov 28 05:09:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e259 e259: 6 total, 6 up, 6 in Nov 28 05:09:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "6640fd55-9a20-4243-a340-7d5b72774834", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/6640fd55-9a20-4243-a340-7d5b72774834/.meta.tmp' Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/6640fd55-9a20-4243-a340-7d5b72774834/.meta.tmp' to config b'/volumes/_nogroup/6640fd55-9a20-4243-a340-7d5b72774834/.meta' Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "6640fd55-9a20-4243-a340-7d5b72774834", "format": "json"}]: dispatch Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7", "format": "json"}]: dispatch Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 
05:09:43 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:43.326+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7' of type subvolume Nov 28 05:09:43 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7' of type subvolume Nov 28 05:09:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7", "force": true, "format": "json"}]: dispatch Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7'' moved to trashcan Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f220ee3f-2e2c-40b6-a8f5-63ee9a01d3d7, vol_name:cephfs) < "" Nov 28 05:09:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:09:43 localhost nova_compute[280168]: 2025-11-28 10:09:43.582 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:43 localhost podman[322254]: 2025-11-28 10:09:43.63983946 +0000 UTC m=+0.083842590 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:09:43 localhost podman[322254]: 2025-11-28 10:09:43.674012455 +0000 UTC m=+0.118015565 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:09:43 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:09:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 195 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 200 KiB/s rd, 67 MiB/s wr, 343 op/s Nov 28 05:09:44 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e260 e260: 6 total, 6 up, 6 in Nov 28 05:09:45 localhost nova_compute[280168]: 2025-11-28 10:09:45.093 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:45 localhost nova_compute[280168]: 2025-11-28 10:09:45.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v521: 177 pgs: 177 active+clean; 195 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 149 KiB/s rd, 21 MiB/s wr, 256 op/s Nov 28 05:09:46 localhost nova_compute[280168]: 2025-11-28 10:09:46.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "6640fd55-9a20-4243-a340-7d5b72774834", "format": "json"}]: dispatch Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:6640fd55-9a20-4243-a340-7d5b72774834, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_clone_status(clone_name:6640fd55-9a20-4243-a340-7d5b72774834, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:46.383+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6640fd55-9a20-4243-a340-7d5b72774834' of type subvolume Nov 28 05:09:46 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '6640fd55-9a20-4243-a340-7d5b72774834' of type subvolume Nov 28 05:09:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "6640fd55-9a20-4243-a340-7d5b72774834", "force": true, "format": "json"}]: dispatch Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/6640fd55-9a20-4243-a340-7d5b72774834'' moved to trashcan Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:6640fd55-9a20-4243-a340-7d5b72774834, vol_name:cephfs) < "" Nov 28 05:09:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e261 e261: 6 total, 6 up, 6 in Nov 28 05:09:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:47 localhost 
nova_compute[280168]: 2025-11-28 10:09:47.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:09:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 93 KiB/s wr, 72 op/s Nov 28 05:09:47 localhost podman[322277]: 2025-11-28 10:09:47.997314524 +0000 UTC m=+0.098938895 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', 
'/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:09:48 localhost podman[322277]: 2025-11-28 10:09:48.014640279 +0000 UTC m=+0.116264640 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:09:48 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:09:48 localhost nova_compute[280168]: 2025-11-28 10:09:48.585 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:48 localhost nova_compute[280168]: 2025-11-28 10:09:48.701 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:48.703 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:09:48 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:48.704 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - 
- - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.279 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.279 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:49 localhost nova_compute[280168]: 2025-11-28 10:09:49.280 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:09:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:09:49 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1498043172' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:09:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:09:49 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1498043172' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:09:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:49.706 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:09:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 2.8 MiB/s rd, 77 KiB/s wr, 59 op/s Nov 28 05:09:50 localhost nova_compute[280168]: 2025-11-28 10:09:50.141 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:50 localhost nova_compute[280168]: 2025-11-28 10:09:50.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: 
dispatch Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta' Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:50 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e262 e262: 6 total, 6 up, 6 in Nov 28 05:09:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "format": "json"}]: dispatch Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:50.853 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by 
"neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:09:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:50.853 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:09:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:09:50.854 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.257 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring 
lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.258 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.258 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.259 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:09:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e263 e263: 6 total, 6 up, 6 in Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0. 
Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.594598) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591594637, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 810, "num_deletes": 255, "total_data_size": 1350948, "memory_usage": 1366608, "flush_reason": "Manual Compaction"} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591601350, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 812869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26865, "largest_seqno": 27670, "table_properties": {"data_size": 809169, "index_size": 1427, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10137, "raw_average_key_size": 21, "raw_value_size": 801282, "raw_average_value_size": 1734, "num_data_blocks": 62, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324562, "oldest_key_time": 1764324562, "file_creation_time": 1764324591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 6787 microseconds, and 1961 cpu microseconds. Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.601383) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 812869 bytes OK Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.601402) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.602877) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.602892) EVENT_LOG_v1 {"time_micros": 1764324591602888, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.602907) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1346523, prev total WAL file 
size 1346523, number of live WAL files 2. Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.603305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303131' seq:72057594037927935, type:22 .. '6D6772737461740034323632' seq:0, type:0; will stop at (end) Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(793KB)], [39(18MB)] Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591603337, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20429408, "oldest_snapshot_seqno": -1} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13350 keys, 18386783 bytes, temperature: kUnknown Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591714545, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18386783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18310465, "index_size": 41849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33413, "raw_key_size": 357110, "raw_average_key_size": 26, "raw_value_size": 
18083398, "raw_average_value_size": 1354, "num_data_blocks": 1573, "num_entries": 13350, "num_filter_entries": 13350, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324591, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.714819) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18386783 bytes Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.717283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.5 rd, 165.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 18.7 +0.0 blob) out(17.5 +0.0 blob), read-write-amplify(47.8) write-amplify(22.6) OK, records in: 13871, records dropped: 521 output_compression: NoCompression Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.717312) EVENT_LOG_v1 {"time_micros": 1764324591717299, "job": 22, "event": "compaction_finished", "compaction_time_micros": 111305, "compaction_time_cpu_micros": 26681, "output_level": 6, "num_output_files": 1, "total_output_size": 18386783, "num_input_records": 13871, "num_output_records": 13350, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591717624, "job": 22, "event": "table_file_deletion", "file_number": 41} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324591720350, 
"job": 22, "event": "table_file_deletion", "file_number": 39} Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.603264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.720442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.720448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.720451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.720454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:09:51.720457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:09:51 localhost ovn_controller[152726]: 2025-11-28T10:09:51Z|00194|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Nov 28 05:09:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:09:51 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2570637429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:09:51 localhost nova_compute[280168]: 2025-11-28 10:09:51.805 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:09:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "11696cb6-6ed6-4708-887e-84f5b86051e8", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:09:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/11696cb6-6ed6-4708-887e-84f5b86051e8/.meta.tmp' Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/11696cb6-6ed6-4708-887e-84f5b86051e8/.meta.tmp' to config b'/volumes/_nogroup/11696cb6-6ed6-4708-887e-84f5b86051e8/.meta' Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:09:51 localhost 
ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "11696cb6-6ed6-4708-887e-84f5b86051e8", "format": "json"}]: dispatch Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:09:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:09:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 177 active+clean; 242 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 283 op/s Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.021 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.023 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11492MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.023 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.024 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.079 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.080 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.102 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:09:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:09:52 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3487273525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.548 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.555 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.647 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider 
/usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.651 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:09:52 localhost nova_compute[280168]: 2025-11-28 10:09:52.651 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:09:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:09:53 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1704733216' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:09:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:09:53 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1704733216' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:09:53 localhost nova_compute[280168]: 2025-11-28 10:09:53.588 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "snap_name": "1f3bf784-193d-4af9-98c3-6a3e9518c295", "format": "json"}]: dispatch Nov 28 05:09:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:09:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 177 active+clean; 242 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 3.0 MiB/s wr, 232 op/s Nov 28 05:09:54 localhost nova_compute[280168]: 2025-11-28 10:09:54.654 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:09:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e264 e264: 6 total, 6 up, 6 in Nov 28 05:09:55 localhost nova_compute[280168]: 2025-11-28 10:09:55.152 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 
[POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "584bbedb-3694-4e91-aa03-2dfba40587ca", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/584bbedb-3694-4e91-aa03-2dfba40587ca/.meta.tmp' Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/584bbedb-3694-4e91-aa03-2dfba40587ca/.meta.tmp' to config b'/volumes/_nogroup/584bbedb-3694-4e91-aa03-2dfba40587ca/.meta' Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:09:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "584bbedb-3694-4e91-aa03-2dfba40587ca", "format": "json"}]: dispatch Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:09:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:09:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 242 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 151 KiB/s rd, 3.6 MiB/s wr, 211 op/s Nov 28 05:09:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e265 e265: 6 total, 6 up, 6 in Nov 28 05:09:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:09:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/.meta.tmp' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/.meta.tmp' to config b'/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/.meta' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:09:57 localhost 
ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "format": "json"}]: dispatch Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "snap_name": "1f3bf784-193d-4af9-98c3-6a3e9518c295", "target_sub_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "format": "json"}]: dispatch Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, target_sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 
53c12f79-a432-40cd-ba04-128957beb093 for path b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, target_sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "format": "json"}]: dispatch Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.567+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.567+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.567+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.567+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.567+0000 7fcc8c452640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:57 localhost openstack_network_exporter[240973]: ERROR 10:09:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:09:57 localhost openstack_network_exporter[240973]: ERROR 10:09:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:09:57 localhost openstack_network_exporter[240973]: ERROR 10:09:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:09:57 localhost openstack_network_exporter[240973]: ERROR 10:09:57 appctl.go:174: 
call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:09:57 localhost openstack_network_exporter[240973]: Nov 28 05:09:57 localhost openstack_network_exporter[240973]: ERROR 10:09:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:09:57 localhost openstack_network_exporter[240973]: Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78 Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, b9fd2bda-5757-4a94-8506-9ee54ffddc78) Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.604+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.604+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.605+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.605+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:09:57.605+0000 7fcc8d454640 -1 client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: client.0 error registering admin socket command: (17) File exists Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, b9fd2bda-5757-4a94-8506-9ee54ffddc78) -- by 0 seconds Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' Nov 28 05:09:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta' Nov 28 05:09:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v532: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 194 KiB/s rd, 3.4 MiB/s wr, 274 op/s Nov 28 05:09:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "11696cb6-6ed6-4708-887e-84f5b86051e8", "format": "json"}]: dispatch Nov 28 05:09:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:11696cb6-6ed6-4708-887e-84f5b86051e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:09:58 localhost nova_compute[280168]: 2025-11-28 10:09:58.592 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:09:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e266 e266: 6 total, 6 up, 6 in Nov 28 05:09:58 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:09:58 localhost podman[239012]: time="2025-11-28T10:09:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:09:58 localhost podman[239012]: @ - - [28/Nov/2025:10:09:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:09:58 localhost podman[239012]: @ - - [28/Nov/2025:10:09:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19225 "" "Go-http-client/1.1" Nov 28 05:09:59 localhost podman[322364]: 2025-11-28 10:09:59.023355756 +0000 UTC m=+0.133066459 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1755695350, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9) Nov 28 05:09:59 localhost podman[322364]: 2025-11-28 10:09:59.037395189 +0000 UTC m=+0.147105912 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 05:09:59 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:09:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 196 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 52 KiB/s wr, 77 op/s Nov 28 05:10:00 localhost nova_compute[280168]: 2025-11-28 10:10:00.187 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:11696cb6-6ed6-4708-887e-84f5b86051e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11696cb6-6ed6-4708-887e-84f5b86051e8' of type subvolume Nov 28 05:10:00 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:00.238+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '11696cb6-6ed6-4708-887e-84f5b86051e8' of type subvolume Nov 28 05:10:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "11696cb6-6ed6-4708-887e-84f5b86051e8", "force": true, "format": "json"}]: dispatch Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] copying data from 
b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.snap/1f3bf784-193d-4af9-98c3-6a3e9518c295/22b0f8fa-9094-41fb-91cd-d34c00640de0' to b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/7237c419-af52-4e4f-b121-86fdcc449f6e' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/11696cb6-6ed6-4708-887e-84f5b86051e8'' moved to trashcan Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:11696cb6-6ed6-4708-887e-84f5b86051e8, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] untracking 53c12f79-a432-40cd-ba04-128957beb093 Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' to config 
b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta.tmp' to config b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78/.meta' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, b9fd2bda-5757-4a94-8506-9ee54ffddc78) Nov 28 05:10:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c7b9c682-dd90-4895-89e5-edc4a14b470f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c7b9c682-dd90-4895-89e5-edc4a14b470f/.meta.tmp' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c7b9c682-dd90-4895-89e5-edc4a14b470f/.meta.tmp' to config b'/volumes/_nogroup/c7b9c682-dd90-4895-89e5-edc4a14b470f/.meta' Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) 
< "" Nov 28 05:10:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c7b9c682-dd90-4895-89e5-edc4a14b470f", "format": "json"}]: dispatch Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.627 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 
Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost 
ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:10:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:10:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve49", "tenant_id": "ed59ec099bfe470982dfd8309e19126f", "access_level": "rw", "format": "json"}]: dispatch Nov 
28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < "" Nov 28 05:10:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Nov 28 05:10:00 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 28 05:10:00 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID eve49 with tenant ed59ec099bfe470982dfd8309e19126f Nov 28 05:10:00 localhost ceph-mon[301134]: overall HEALTH_OK Nov 28 05:10:00 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 28 05:10:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:10:00 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", 
"allow r"], "format": "json"} : dispatch Nov 28 05:10:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < "" Nov 28 05:10:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "584bbedb-3694-4e91-aa03-2dfba40587ca", "format": "json"}]: dispatch Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:584bbedb-3694-4e91-aa03-2dfba40587ca, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:584bbedb-3694-4e91-aa03-2dfba40587ca, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:01 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:01.803+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '584bbedb-3694-4e91-aa03-2dfba40587ca' of type subvolume Nov 28 05:10:01 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '584bbedb-3694-4e91-aa03-2dfba40587ca' of type subvolume Nov 28 05:10:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "584bbedb-3694-4e91-aa03-2dfba40587ca", "force": true, "format": "json"}]: dispatch Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:10:01 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:01 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:01 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/584bbedb-3694-4e91-aa03-2dfba40587ca'' moved to trashcan Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:584bbedb-3694-4e91-aa03-2dfba40587ca, vol_name:cephfs) < "" Nov 28 05:10:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e266 
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:10:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 128 KiB/s wr, 160 op/s
Nov 28 05:10:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "format": "json"}]: dispatch
Nov 28 05:10:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:03 localhost nova_compute[280168]: 2025-11-28 10:10:03.594 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:03 localhost nova_compute[280168]: 2025-11-28 10:10:03.596 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 28 05:10:03 localhost podman[322399]: 2025-11-28 10:10:03.608905949 +0000 UTC m=+0.070628981 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 28 05:10:03 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses
Nov 28 05:10:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:10:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:10:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 115 KiB/s wr, 144 op/s
Nov 28 05:10:05 localhost nova_compute[280168]: 2025-11-28 10:10:05.233 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "df1f50b2-116f-4913-b966-9e6fb632edd2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/df1f50b2-116f-4913-b966-9e6fb632edd2/.meta.tmp'
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/df1f50b2-116f-4913-b966-9e6fb632edd2/.meta.tmp' to config b'/volumes/_nogroup/df1f50b2-116f-4913-b966-9e6fb632edd2/.meta'
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "df1f50b2-116f-4913-b966-9e6fb632edd2", "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c7b9c682-dd90-4895-89e5-edc4a14b470f", "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:05.572+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7b9c682-dd90-4895-89e5-edc4a14b470f' of type subvolume
Nov 28 05:10:05 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c7b9c682-dd90-4895-89e5-edc4a14b470f' of type subvolume
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c7b9c682-dd90-4895-89e5-edc4a14b470f", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c7b9c682-dd90-4895-89e5-edc4a14b470f'' moved to trashcan
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c7b9c682-dd90-4895-89e5-edc4a14b470f, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:10:05
Nov 28 05:10:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 28 05:10:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap
Nov 28 05:10:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['images', 'manila_metadata', 'manila_data', 'volumes', 'backups', '.mgr', 'vms']
Nov 28 05:10:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve48", "tenant_id": "ed59ec099bfe470982dfd8309e19126f", "access_level": "rw", "format": "json"}]: dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Nov 28 05:10:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 28 05:10:05 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID eve48 with tenant ed59ec099bfe470982dfd8309e19126f
Nov 28 05:10:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:10:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:10:05 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 28 05:10:05 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:05 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:05 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:10:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 99 KiB/s wr, 124 op/s
Nov 28 05:10:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < ""
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 3.271566164154104e-06 of space, bias 1.0, pg target 0.0006510416666666666 quantized to 32 (current 32)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:10:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0003465133828866555 of space, bias 4.0, pg target 0.2758246527777778 quantized to 16 (current 16)
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:10:05 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:10:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e267 e267: 6 total, 6 up, 6 in
Nov 28 05:10:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:10:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "format": "json"}]: dispatch
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9fd2bda-5757-4a94-8506-9ee54ffddc78", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < ""
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9fd2bda-5757-4a94-8506-9ee54ffddc78'' moved to trashcan
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9fd2bda-5757-4a94-8506-9ee54ffddc78, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "503f8ba3-8dec-4f60-af76-593096ff9b7e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/503f8ba3-8dec-4f60-af76-593096ff9b7e/.meta.tmp'
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/503f8ba3-8dec-4f60-af76-593096ff9b7e/.meta.tmp' to config b'/volumes/_nogroup/503f8ba3-8dec-4f60-af76-593096ff9b7e/.meta'
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "503f8ba3-8dec-4f60-af76-593096ff9b7e", "format": "json"}]: dispatch
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Nov 28 05:10:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 28 05:10:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Nov 28 05:10:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve48", "format": "json"}]: dispatch
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:10:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < ""
Nov 28 05:10:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Nov 28 05:10:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Nov 28 05:10:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Nov 28 05:10:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Nov 28 05:10:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 123 KiB/s wr, 81 op/s
Nov 28 05:10:08 localhost nova_compute[280168]: 2025-11-28 10:10:08.597 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4122e8d3-d0ce-4fba-8b4f-9622dd23c08b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < ""
Nov 28 05:10:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 197 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 112 KiB/s wr, 74 op/s
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4122e8d3-d0ce-4fba-8b4f-9622dd23c08b/.meta.tmp'
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4122e8d3-d0ce-4fba-8b4f-9622dd23c08b/.meta.tmp' to config b'/volumes/_nogroup/4122e8d3-d0ce-4fba-8b4f-9622dd23c08b/.meta'
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < ""
Nov 28 05:10:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4122e8d3-d0ce-4fba-8b4f-9622dd23c08b", "format": "json"}]: dispatch
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < ""
Nov 28 05:10:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "snap_name": "1f3bf784-193d-4af9-98c3-6a3e9518c295_c90956cb-9363-45be-b7cc-986dc86d427c", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295_c90956cb-9363-45be-b7cc-986dc86d427c, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp'
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta'
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295_c90956cb-9363-45be-b7cc-986dc86d427c, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "snap_name": "1f3bf784-193d-4af9-98c3-6a3e9518c295", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp'
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta.tmp' to config b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179/.meta'
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:1f3bf784-193d-4af9-98c3-6a3e9518c295, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost nova_compute[280168]: 2025-11-28 10:10:10.260 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve47", "tenant_id": "ed59ec099bfe470982dfd8309e19126f", "access_level": "rw", "format": "json"}]: dispatch
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Nov 28 05:10:10 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Nov 28 05:10:10 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID eve47 with tenant ed59ec099bfe470982dfd8309e19126f
Nov 28 05:10:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:10:10 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:10 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Nov 28 05:10:10 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:10 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:10 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509", "osd", "allow rw pool=manila_data namespace=fsvolumens_aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, tenant_id:ed59ec099bfe470982dfd8309e19126f, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "503f8ba3-8dec-4f60-af76-593096ff9b7e", "format": "json"}]: dispatch
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '503f8ba3-8dec-4f60-af76-593096ff9b7e' of type subvolume
Nov 28 05:10:10 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:10.853+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '503f8ba3-8dec-4f60-af76-593096ff9b7e' of type subvolume
Nov 28 05:10:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "503f8ba3-8dec-4f60-af76-593096ff9b7e", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/503f8ba3-8dec-4f60-af76-593096ff9b7e'' moved to trashcan
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:503f8ba3-8dec-4f60-af76-593096ff9b7e, vol_name:cephfs) < ""
Nov 28 05:10:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:10:11 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:10:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 122 KiB/s wr, 13 op/s Nov 28 05:10:11 localhost podman[322420]: 2025-11-28 10:10:11.966336761 +0000 UTC m=+0.078705120 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:10:12 localhost podman[322421]: 2025-11-28 10:10:11.989347501 +0000 UTC m=+0.092886808 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller) Nov 28 05:10:12 localhost podman[322428]: 2025-11-28 10:10:12.053652107 +0000 UTC m=+0.147942669 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, 
maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:10:12 localhost podman[322428]: 2025-11-28 10:10:12.0615481 +0000 UTC m=+0.155838682 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:10:12 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:10:12 localhost podman[322421]: 2025-11-28 10:10:12.074692356 +0000 UTC m=+0.178231693 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible) Nov 28 05:10:12 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:10:12 localhost podman[322427]: 2025-11-28 10:10:12.025161497 +0000 UTC m=+0.122682418 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true) Nov 28 05:10:12 localhost podman[322427]: 2025-11-28 10:10:12.158661778 +0000 UTC 
m=+0.256182689 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 05:10:12 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:10:12 localhost podman[322420]: 2025-11-28 10:10:12.179607795 +0000 UTC m=+0.291976154 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:10:12 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:10:12 localhost systemd[1]: tmp-crun.g49Pzs.mount: Deactivated successfully. Nov 28 05:10:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "format": "json"}]: dispatch Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:13.291+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3469c6cd-cd93-4e38-8faa-549a0ddf9179' of type subvolume Nov 28 05:10:13 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3469c6cd-cd93-4e38-8faa-549a0ddf9179' of type subvolume Nov 28 05:10:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3469c6cd-cd93-4e38-8faa-549a0ddf9179", "force": true, "format": "json"}]: dispatch Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3469c6cd-cd93-4e38-8faa-549a0ddf9179'' moved to trashcan Nov 28 
05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3469c6cd-cd93-4e38-8faa-549a0ddf9179, vol_name:cephfs) < "" Nov 28 05:10:13 localhost nova_compute[280168]: 2025-11-28 10:10:13.600 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4122e8d3-d0ce-4fba-8b4f-9622dd23c08b", "format": "json"}]: dispatch Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:13.888+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4122e8d3-d0ce-4fba-8b4f-9622dd23c08b' of type subvolume Nov 28 05:10:13 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4122e8d3-d0ce-4fba-8b4f-9622dd23c08b' of type subvolume Nov 28 05:10:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4122e8d3-d0ce-4fba-8b4f-9622dd23c08b", "force": true, "format": "json"}]: dispatch Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < "" Nov 28 05:10:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 122 KiB/s wr, 13 op/s Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4122e8d3-d0ce-4fba-8b4f-9622dd23c08b'' moved to trashcan Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4122e8d3-d0ce-4fba-8b4f-9622dd23c08b, vol_name:cephfs) < "" Nov 28 05:10:13 localhost 
podman[322502]: 2025-11-28 10:10:13.990305881 +0000 UTC m=+0.094893871 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:10:14 localhost podman[322502]: 2025-11-28 10:10:14.024875158 +0000 UTC m=+0.129463128 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:10:14 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a26a558f-6d92-48d1-91b7-33af52872ba0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a26a558f-6d92-48d1-91b7-33af52872ba0/.meta.tmp' Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a26a558f-6d92-48d1-91b7-33af52872ba0/.meta.tmp' to config b'/volumes/_nogroup/a26a558f-6d92-48d1-91b7-33af52872ba0/.meta' Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a26a558f-6d92-48d1-91b7-33af52872ba0", "format": "json"}]: dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta' Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "format": "json"}]: dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve47", "format": "json"}]: dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0) Nov 28 05:10:14 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 28 05:10:14 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0) Nov 28 05:10:14 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve47", "format": "json"}]: dispatch Nov 28 05:10:14 localhost 
ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509 Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:10:14 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:14 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Nov 28 05:10:14 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch Nov 28 05:10:14 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch Nov 28 05:10:14 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished Nov 28 05:10:15 localhost nova_compute[280168]: 2025-11-28 10:10:15.291 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e268 e268: 6 total, 6 up, 6 in Nov 28 05:10:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v544: 177 pgs: 177 active+clean; 198 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 989 
B/s rd, 131 KiB/s wr, 14 op/s Nov 28 05:10:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "snap_name": "3d88a300-3326-420b-858f-9b926d8029b3", "format": "json"}]: dispatch Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3d88a300-3326-420b-858f-9b926d8029b3, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:3d88a300-3326-420b-858f-9b926d8029b3, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v545: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 14 op/s Nov 28 05:10:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a26a558f-6d92-48d1-91b7-33af52872ba0", "format": "json"}]: dispatch Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a26a558f-6d92-48d1-91b7-33af52872ba0, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a26a558f-6d92-48d1-91b7-33af52872ba0, format:json, prefix:fs clone status, vol_name:cephfs) < 
"" Nov 28 05:10:17 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:17.959+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a26a558f-6d92-48d1-91b7-33af52872ba0' of type subvolume Nov 28 05:10:17 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a26a558f-6d92-48d1-91b7-33af52872ba0' of type subvolume Nov 28 05:10:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a26a558f-6d92-48d1-91b7-33af52872ba0", "force": true, "format": "json"}]: dispatch Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a26a558f-6d92-48d1-91b7-33af52872ba0'' moved to trashcan Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a26a558f-6d92-48d1-91b7-33af52872ba0, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve49", "format": "json"}]: dispatch Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume 
deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0) Nov 28 05:10:18 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 28 05:10:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0) Nov 28 05:10:18 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "auth_id": "eve49", "format": "json"}]: dispatch Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9/42613c17-5566-4a72-88af-f970b5fd2509 Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 
05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "format": "json"}]: dispatch Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:18.481+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aee96a4c-0a14-47e6-b8e5-ce0279118ec9' of type subvolume Nov 28 05:10:18 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aee96a4c-0a14-47e6-b8e5-ce0279118ec9' of type subvolume Nov 28 05:10:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aee96a4c-0a14-47e6-b8e5-ce0279118ec9", "force": true, "format": "json"}]: dispatch Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aee96a4c-0a14-47e6-b8e5-ce0279118ec9'' moved to trashcan Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aee96a4c-0a14-47e6-b8e5-ce0279118ec9, vol_name:cephfs) < "" Nov 28 05:10:18 localhost nova_compute[280168]: 2025-11-28 10:10:18.601 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:18 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Nov 28 05:10:18 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch Nov 28 05:10:18 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch Nov 28 05:10:18 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished Nov 28 05:10:18 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:10:18 localhost podman[322527]: 2025-11-28 10:10:18.974976336 +0000 UTC m=+0.083519119 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3) Nov 28 05:10:18 localhost podman[322527]: 2025-11-28 10:10:18.989455053 +0000 UTC m=+0.097997856 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0) Nov 28 05:10:19 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:10:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 198 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 125 KiB/s wr, 14 op/s Nov 28 05:10:20 localhost nova_compute[280168]: 2025-11-28 10:10:20.326 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "snap_name": "3d88a300-3326-420b-858f-9b926d8029b3_22da53bc-1abc-41d9-a746-84c48ea05590", "force": true, "format": "json"}]: dispatch Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d88a300-3326-420b-858f-9b926d8029b3_22da53bc-1abc-41d9-a746-84c48ea05590, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d88a300-3326-420b-858f-9b926d8029b3_22da53bc-1abc-41d9-a746-84c48ea05590, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "snap_name": "3d88a300-3326-420b-858f-9b926d8029b3", "force": true, "format": "json"}]: dispatch Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d88a300-3326-420b-858f-9b926d8029b3, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta.tmp' to config b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f/.meta' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:3d88a300-3326-420b-858f-9b926d8029b3, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d8f7d257-b798-4b3d-88f0-c0bbfa330aa3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes 
to config b'/volumes/_nogroup/d8f7d257-b798-4b3d-88f0-c0bbfa330aa3/.meta.tmp' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d8f7d257-b798-4b3d-88f0-c0bbfa330aa3/.meta.tmp' to config b'/volumes/_nogroup/d8f7d257-b798-4b3d-88f0-c0bbfa330aa3/.meta' Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d8f7d257-b798-4b3d-88f0-c0bbfa330aa3", "format": "json"}]: dispatch Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e269 e269: 6 total, 6 up, 6 in Nov 28 05:10:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v548: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 141 KiB/s wr, 15 op/s Nov 28 05:10:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e270 e270: 6 total, 6 up, 6 in Nov 28 05:10:23 localhost nova_compute[280168]: 2025-11-28 10:10:23.604 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 141 KiB/s wr, 15 op/s Nov 28 05:10:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "format": "json"}]: dispatch Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:24.310+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f' of type subvolume Nov 28 05:10:24 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f' of type subvolume Nov 28 05:10:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f", "force": true, "format": "json"}]: dispatch Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f'' moved to trashcan Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fd57705e-1bdf-4eac-bcf9-3bad5ad20c9f, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d8f7d257-b798-4b3d-88f0-c0bbfa330aa3", "format": "json"}]: dispatch Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:24.527+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8f7d257-b798-4b3d-88f0-c0bbfa330aa3' of type subvolume Nov 28 05:10:24 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd8f7d257-b798-4b3d-88f0-c0bbfa330aa3' of type subvolume Nov 28 05:10:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"d8f7d257-b798-4b3d-88f0-c0bbfa330aa3", "force": true, "format": "json"}]: dispatch Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d8f7d257-b798-4b3d-88f0-c0bbfa330aa3'' moved to trashcan Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d8f7d257-b798-4b3d-88f0-c0bbfa330aa3, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:25 localhost nova_compute[280168]: 2025-11-28 10:10:25.365 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/.meta.tmp' Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed 
b'/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/.meta.tmp' to config b'/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/.meta' Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "format": "json"}]: dispatch Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta' Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "format": "json"}]: dispatch Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v551: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 73 KiB/s wr, 6 op/s Nov 28 05:10:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:27 localhost openstack_network_exporter[240973]: ERROR 10:10:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:10:27 localhost openstack_network_exporter[240973]: ERROR 10:10:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:10:27 localhost openstack_network_exporter[240973]: ERROR 10:10:27 appctl.go:131: 
Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:10:27 localhost openstack_network_exporter[240973]: ERROR 10:10:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:10:27 localhost openstack_network_exporter[240973]: Nov 28 05:10:27 localhost openstack_network_exporter[240973]: ERROR 10:10:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:10:27 localhost openstack_network_exporter[240973]: Nov 28 05:10:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b6df0b61-7d52-4368-906d-590e5295b08d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b6df0b61-7d52-4368-906d-590e5295b08d/.meta.tmp' Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b6df0b61-7d52-4368-906d-590e5295b08d/.meta.tmp' to config b'/volumes/_nogroup/b6df0b61-7d52-4368-906d-590e5295b08d/.meta' Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' 
cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b6df0b61-7d52-4368-906d-590e5295b08d", "format": "json"}]: dispatch Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 128 KiB/s wr, 11 op/s Nov 28 05:10:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/.meta.tmp' Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/.meta.tmp' to config b'/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/.meta' Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "format": "json"}]: dispatch Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:28 localhost nova_compute[280168]: 2025-11-28 10:10:28.605 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "snap_name": "cdc57bbb-b20f-407d-9c9a-d134937d596e", "format": "json"}]: dispatch Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mgr[286188]: 
log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:10:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:28 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:10:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:10:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:28 localhost podman[239012]: time="2025-11-28T10:10:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:10:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:28 localhost podman[239012]: @ - - [28/Nov/2025:10:10:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:10:28 localhost podman[239012]: @ - - [28/Nov/2025:10:10:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19248 "" "Go-http-client/1.1" Nov 28 05:10:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:29 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:29 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:10:29 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:10:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v553: 177 pgs: 177 active+clean; 199 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 862 B/s rd, 123 KiB/s wr, 11 op/s Nov 28 05:10:29 localhost podman[322556]: 2025-11-28 10:10:29.983199797 +0000 UTC m=+0.084557441 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) 
Nov 28 05:10:30 localhost podman[322556]: 2025-11-28 10:10:30.001632746 +0000 UTC m=+0.102990390 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41) Nov 28 05:10:30 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:10:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "204dc313-c4b7-4b7f-a2b9-7d12dcbb771e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/204dc313-c4b7-4b7f-a2b9-7d12dcbb771e/.meta.tmp' Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/204dc313-c4b7-4b7f-a2b9-7d12dcbb771e/.meta.tmp' to config b'/volumes/_nogroup/204dc313-c4b7-4b7f-a2b9-7d12dcbb771e/.meta' Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "204dc313-c4b7-4b7f-a2b9-7d12dcbb771e", "format": "json"}]: dispatch Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:30 localhost nova_compute[280168]: 2025-11-28 10:10:30.405 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:10:30 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:10:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:10:30 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:10:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:10:30 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev a8d100a9-a4f7-40c5-b992-9fa7b68fd68f (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:10:30 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev a8d100a9-a4f7-40c5-b992-9fa7b68fd68f (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:10:30 localhost ceph-mgr[286188]: [progress INFO root] Completed event a8d100a9-a4f7-40c5-b992-9fa7b68fd68f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:10:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:10:30 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' 
cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:10:31 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:10:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:10:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 e271: 6 total, 6 up, 6 in Nov 28 05:10:31 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:10:31 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:10:31 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:10:31 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "auth_id": "tempest-cephx-id-254686751", "tenant_id": "a65552de119e4309a43e9e85b3f7e533", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:10:31 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume authorize, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, tenant_id:a65552de119e4309a43e9e85b3f7e533, vol_name:cephfs) < "" Nov 28 05:10:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} v 0) Nov 28 05:10:31 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} : dispatch Nov 28 05:10:31 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-254686751 with tenant a65552de119e4309a43e9e85b3f7e533 Nov 28 05:10:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-254686751", "caps": ["mds", "allow rw path=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15", "osd", "allow rw pool=manila_data namespace=fsvolumens_3c672c69-0cef-413e-aa2f-cebe487d9fad", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:10:31 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-254686751", "caps": ["mds", "allow rw path=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15", "osd", "allow rw pool=manila_data namespace=fsvolumens_3c672c69-0cef-413e-aa2f-cebe487d9fad", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:31 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume authorize, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, tenant_id:a65552de119e4309a43e9e85b3f7e533, vol_name:cephfs) < "" Nov 28 05:10:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 441 B/s rd, 100 KiB/s wr, 8 op/s Nov 28 05:10:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", 
"clone_name": "b6df0b61-7d52-4368-906d-590e5295b08d", "format": "json"}]: dispatch Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b6df0b61-7d52-4368-906d-590e5295b08d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b6df0b61-7d52-4368-906d-590e5295b08d, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:32.267+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6df0b61-7d52-4368-906d-590e5295b08d' of type subvolume Nov 28 05:10:32 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b6df0b61-7d52-4368-906d-590e5295b08d' of type subvolume Nov 28 05:10:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b6df0b61-7d52-4368-906d-590e5295b08d", "force": true, "format": "json"}]: dispatch Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b6df0b61-7d52-4368-906d-590e5295b08d'' moved to trashcan Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:b6df0b61-7d52-4368-906d-590e5295b08d, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "snap_name": "cdc57bbb-b20f-407d-9c9a-d134937d596e", "target_sub_name": "96f781a9-18ad-48a6-a288-5b831718a338", "format": "json"}]: dispatch Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, target_sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id d904628f-63ce-454f-9e8e-ced6a9a698f0 for path b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, target_sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338 Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, 96f781a9-18ad-48a6-a288-5b831718a338) Nov 28 05:10:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96f781a9-18ad-48a6-a288-5b831718a338", "format": "json"}]: dispatch Nov 28 05:10:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:32 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} : dispatch Nov 28 05:10:32 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-254686751", "caps": ["mds", "allow rw path=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15", "osd", "allow rw pool=manila_data namespace=fsvolumens_3c672c69-0cef-413e-aa2f-cebe487d9fad", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:32 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-254686751", "caps": ["mds", "allow rw path=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15", "osd", "allow rw pool=manila_data namespace=fsvolumens_3c672c69-0cef-413e-aa2f-cebe487d9fad", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:32 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-254686751", "caps": ["mds", "allow rw path=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15", "osd", "allow rw pool=manila_data namespace=fsvolumens_3c672c69-0cef-413e-aa2f-cebe487d9fad", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:10:33 localhost nova_compute[280168]: 2025-11-28 10:10:33.608 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 93 KiB/s wr, 7 op/s Nov 28 05:10:34 localhost ovn_controller[152726]: 2025-11-28T10:10:34Z|00195|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, 96f781a9-18ad-48a6-a288-5b831718a338) -- by 0 seconds Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta' Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes 
INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:35 localhost nova_compute[280168]: 2025-11-28 10:10:35.450 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:10:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:10:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 93 KiB/s wr, 7 op/s Nov 28 05:10:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v558: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 97 KiB/s wr, 8 op/s Nov 28 05:10:38 localhost nova_compute[280168]: 2025-11-28 10:10:38.611 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 97 KiB/s wr, 8 op/s Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.snap/cdc57bbb-b20f-407d-9c9a-d134937d596e/c18e34d9-3414-43d5-bf95-d8340acbe301' to b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/05021c8f-ee51-4139-ac5a-e9446a1dbb7b' Nov 28 05:10:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:10:40 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:40 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:10:40 
localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta' Nov 28 05:10:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.clone_index] untracking d904628f-63ce-454f-9e8e-ced6a9a698f0 Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' to config 
b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta.tmp' to config b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338/.meta' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, 96f781a9-18ad-48a6-a288-5b831718a338) Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:40 localhost nova_compute[280168]: 2025-11-28 10:10:40.484 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:40 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:10:40 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:40 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", 
"entity": "client.alice"} : dispatch Nov 28 05:10:40 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:10:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "204dc313-c4b7-4b7f-a2b9-7d12dcbb771e", "format": "json"}]: dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:40.690+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '204dc313-c4b7-4b7f-a2b9-7d12dcbb771e' of type subvolume Nov 28 05:10:40 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '204dc313-c4b7-4b7f-a2b9-7d12dcbb771e' of type subvolume Nov 28 05:10:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "204dc313-c4b7-4b7f-a2b9-7d12dcbb771e", "force": true, "format": "json"}]: dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/204dc313-c4b7-4b7f-a2b9-7d12dcbb771e'' moved to trashcan Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:204dc313-c4b7-4b7f-a2b9-7d12dcbb771e, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "68e2b06e-b3f4-47c6-ba92-1f283cfd85db", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/68e2b06e-b3f4-47c6-ba92-1f283cfd85db/.meta.tmp' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/68e2b06e-b3f4-47c6-ba92-1f283cfd85db/.meta.tmp' to config b'/volumes/_nogroup/68e2b06e-b3f4-47c6-ba92-1f283cfd85db/.meta' Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", 
"sub_name": "68e2b06e-b3f4-47c6-ba92-1f283cfd85db", "format": "json"}]: dispatch Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "auth_id": "tempest-cephx-id-254686751", "format": "json"}]: dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume deauthorize, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} v 0) Nov 28 05:10:41 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-254686751"} v 0) Nov 28 05:10:41 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-254686751"} : dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume deauthorize, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "auth_id": "tempest-cephx-id-254686751", "format": "json"}]: dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume evict, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-254686751, client_metadata.root=/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad/4ce11f7d-42a8-42fc-8022-08399d2c2f15 Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-254686751, format:json, prefix:fs subvolume evict, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, 
sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:10:41 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:10:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:10:41 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-254686751"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-254686751", "format": "json"} : dispatch Nov 28 
05:10:41 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-254686751"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-254686751"}]': finished Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(audit) 
log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "format": "json"}]: dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:41.869+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c672c69-0cef-413e-aa2f-cebe487d9fad' of type subvolume Nov 28 05:10:41 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c672c69-0cef-413e-aa2f-cebe487d9fad' of type subvolume Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c672c69-0cef-413e-aa2f-cebe487d9fad", "force": true, "format": "json"}]: dispatch Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v560: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 695 B/s rd, 193 KiB/s wr, 16 op/s Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 
'b'/volumes/_nogroup/3c672c69-0cef-413e-aa2f-cebe487d9fad'' moved to trashcan Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c672c69-0cef-413e-aa2f-cebe487d9fad, vol_name:cephfs) < "" Nov 28 05:10:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9f812932-0755-4fbf-a6f3-aea9e3a38b58", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, vol_name:cephfs) < "" Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9f812932-0755-4fbf-a6f3-aea9e3a38b58/.meta.tmp' Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9f812932-0755-4fbf-a6f3-aea9e3a38b58/.meta.tmp' to config b'/volumes/_nogroup/9f812932-0755-4fbf-a6f3-aea9e3a38b58/.meta' Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, vol_name:cephfs) < "" Nov 28 05:10:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9f812932-0755-4fbf-a6f3-aea9e3a38b58", "format": "json"}]: 
dispatch Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, vol_name:cephfs) < "" Nov 28 05:10:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, vol_name:cephfs) < "" Nov 28 05:10:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:10:42 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:10:43 localhost podman[322658]: 2025-11-28 10:10:43.045488986 +0000 UTC m=+0.136569817 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:10:43 localhost podman[322658]: 2025-11-28 10:10:43.058518149 +0000 UTC m=+0.149599030 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:10:43 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:10:43 localhost systemd[1]: tmp-crun.PsOkRQ.mount: Deactivated successfully. Nov 28 05:10:43 localhost podman[322655]: 2025-11-28 10:10:43.109259255 +0000 UTC m=+0.206877348 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS) Nov 28 05:10:43 localhost podman[322657]: 2025-11-28 10:10:43.010117314 +0000 UTC m=+0.103425994 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:10:43 localhost podman[322657]: 2025-11-28 10:10:43.145429422 +0000 UTC m=+0.238738142 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:10:43 localhost podman[322656]: 2025-11-28 10:10:43.155055048 +0000 UTC m=+0.249662938 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:10:43 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:10:43 localhost podman[322655]: 2025-11-28 10:10:43.176638375 +0000 UTC m=+0.274256478 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:10:43 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:10:43 localhost podman[322656]: 2025-11-28 10:10:43.203571756 +0000 UTC m=+0.298179696 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3) Nov 28 05:10:43 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:10:43 localhost nova_compute[280168]: 2025-11-28 10:10:43.614 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 596 B/s rd, 125 KiB/s wr, 10 op/s Nov 28 05:10:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4afb8271-18a6-4a0b-8cb1-7414aa7c5267", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4afb8271-18a6-4a0b-8cb1-7414aa7c5267/.meta.tmp' Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4afb8271-18a6-4a0b-8cb1-7414aa7c5267/.meta.tmp' to config b'/volumes/_nogroup/4afb8271-18a6-4a0b-8cb1-7414aa7c5267/.meta' Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4afb8271-18a6-4a0b-8cb1-7414aa7c5267", "format": "json"}]: 
dispatch Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:44 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:10:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:44 localhost podman[322741]: 2025-11-28 10:10:44.982690466 +0000 UTC m=+0.079208756 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', 
'--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta' Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "format": "json"}]: dispatch Nov 28 05:10:44 localhost podman[322741]: 2025-11-28 10:10:44.994865543 +0000 UTC m=+0.091383823 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:10:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:45 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:10:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:10:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:10:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, 
sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:10:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:45 localhost nova_compute[280168]: 2025-11-28 10:10:45.522 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v562: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 125 KiB/s wr, 10 op/s Nov 28 05:10:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:10:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:10:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:10:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:10:46 localhost nova_compute[280168]: 2025-11-28 10:10:46.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:10:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9f812932-0755-4fbf-a6f3-aea9e3a38b58", "format": "json"}]: dispatch Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:46 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:46.915+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f812932-0755-4fbf-a6f3-aea9e3a38b58' of type subvolume Nov 28 05:10:46 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9f812932-0755-4fbf-a6f3-aea9e3a38b58' of type subvolume Nov 28 05:10:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9f812932-0755-4fbf-a6f3-aea9e3a38b58", "force": true, "format": "json"}]: dispatch Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, 
vol_name:cephfs) < "" Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9f812932-0755-4fbf-a6f3-aea9e3a38b58'' moved to trashcan Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9f812932-0755-4fbf-a6f3-aea9e3a38b58, vol_name:cephfs) < "" Nov 28 05:10:47 localhost nova_compute[280168]: 2025-11-28 10:10:47.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:47 localhost nova_compute[280168]: 2025-11-28 10:10:47.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:47 localhost nova_compute[280168]: 2025-11-28 10:10:47.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 05:10:47 localhost nova_compute[280168]: 2025-11-28 10:10:47.259 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 05:10:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"4afb8271-18a6-4a0b-8cb1-7414aa7c5267", "format": "json"}]: dispatch Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:47 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:47.737+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4afb8271-18a6-4a0b-8cb1-7414aa7c5267' of type subvolume Nov 28 05:10:47 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4afb8271-18a6-4a0b-8cb1-7414aa7c5267' of type subvolume Nov 28 05:10:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4afb8271-18a6-4a0b-8cb1-7414aa7c5267", "force": true, "format": "json"}]: dispatch Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4afb8271-18a6-4a0b-8cb1-7414aa7c5267'' moved to trashcan Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:4afb8271-18a6-4a0b-8cb1-7414aa7c5267, vol_name:cephfs) < "" Nov 28 05:10:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 167 KiB/s wr, 16 op/s Nov 28 05:10:48 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:10:48 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:10:48 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:10:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:10:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:10:48 localhost ceph-mon[301134]: 
log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:48 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:10:48 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:48 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:10:48 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:10:48 localhost 
ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:10:48 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "snap_name": "feab8b53-fb10-403a-9e5e-ef10a5640c05", "format": "json"}]: dispatch Nov 28 05:10:48 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:48 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:10:48 localhost nova_compute[280168]: 2025-11-28 10:10:48.617 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:49 localhost nova_compute[280168]: 2025-11-28 10:10:49.257 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:49 localhost nova_compute[280168]: 2025-11-28 10:10:49.258 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:10:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:49.593 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:10:49 localhost nova_compute[280168]: 2025-11-28 10:10:49.594 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:49.595 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:10:49 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:49.595 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:10:49 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:10:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v564: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 128 KiB/s wr, 12 op/s Nov 28 05:10:49 localhost podman[322765]: 2025-11-28 10:10:49.956347802 +0000 UTC m=+0.067520895 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 05:10:49 localhost podman[322765]: 2025-11-28 10:10:49.99874276 +0000 UTC m=+0.109915793 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS) Nov 28 05:10:50 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: 
Deactivated successfully. Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.265 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.265 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:10:50 localhost nova_compute[280168]: 2025-11-28 10:10:50.570 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:50.854 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:10:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:10:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:10:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:10:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "68e2b06e-b3f4-47c6-ba92-1f283cfd85db", "format": "json"}]: dispatch Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting 
_cmd_fs_clone_status(clone_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:10:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:50.967+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '68e2b06e-b3f4-47c6-ba92-1f283cfd85db' of type subvolume Nov 28 05:10:50 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '68e2b06e-b3f4-47c6-ba92-1f283cfd85db' of type subvolume Nov 28 05:10:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "68e2b06e-b3f4-47c6-ba92-1f283cfd85db", "force": true, "format": "json"}]: dispatch Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/68e2b06e-b3f4-47c6-ba92-1f283cfd85db'' moved to trashcan Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:68e2b06e-b3f4-47c6-ba92-1f283cfd85db, vol_name:cephfs) < "" Nov 28 05:10:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp'
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp' to config b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta'
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost nova_compute[280168]: 2025-11-28 10:10:51.260 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Nov 28 05:10:51 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:10:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Nov 28 05:10:51 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "df1f50b2-116f-4913-b966-9e6fb632edd2", "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:df1f50b2-116f-4913-b966-9e6fb632edd2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:df1f50b2-116f-4913-b966-9e6fb632edd2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:51.681+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df1f50b2-116f-4913-b966-9e6fb632edd2' of type subvolume
Nov 28 05:10:51 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'df1f50b2-116f-4913-b966-9e6fb632edd2' of type subvolume
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "df1f50b2-116f-4913-b966-9e6fb632edd2", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/df1f50b2-116f-4913-b966-9e6fb632edd2'' moved to trashcan
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:df1f50b2-116f-4913-b966-9e6fb632edd2, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "619d1399-3e67-47c1-b13e-c8a98f88c137", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 205 KiB/s wr, 18 op/s
Nov 28 05:10:52 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/619d1399-3e67-47c1-b13e-c8a98f88c137/.meta.tmp'
Nov 28 05:10:52 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/619d1399-3e67-47c1-b13e-c8a98f88c137/.meta.tmp' to config b'/volumes/_nogroup/619d1399-3e67-47c1-b13e-c8a98f88c137/.meta'
Nov 28 05:10:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "619d1399-3e67-47c1-b13e-c8a98f88c137", "format": "json"}]: dispatch
Nov 28 05:10:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:52 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:10:52 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:10:52 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:10:52 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.264 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.264 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.265 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.265 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.266 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.620 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 05:10:53 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1858069606' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.733 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 05:10:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 120 KiB/s wr, 11 op/s
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.944 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.945 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11487MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.945 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 05:10:53 localhost nova_compute[280168]: 2025-11-28 10:10:53.946 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.184 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.185 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "snap_name": "8728644f-8752-4fbc-9dba-996bfb595ace", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.379 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f288b089-d265-4559-9acd-a03615016513", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f288b089-d265-4559-9acd-a03615016513/.meta.tmp'
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f288b089-d265-4559-9acd-a03615016513/.meta.tmp' to config b'/volumes/_nogroup/f288b089-d265-4559-9acd-a03615016513/.meta'
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f288b089-d265-4559-9acd-a03615016513", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/.meta.tmp'
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/.meta.tmp' to config b'/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/.meta'
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 05:10:54 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2557878891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.890 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.895 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.910 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.911 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.912 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 05:10:54 localhost nova_compute[280168]: 2025-11-28 10:10:54.912 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:10:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch
Nov 28 05:10:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:10:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Nov 28 05:10:54 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:10:54 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:10:55 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:10:55 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:10:55 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:10:55 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:55 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:10:55 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:10:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "619d1399-3e67-47c1-b13e-c8a98f88c137", "format": "json"}]: dispatch
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:619d1399-3e67-47c1-b13e-c8a98f88c137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:619d1399-3e67-47c1-b13e-c8a98f88c137, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:55 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:55.310+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619d1399-3e67-47c1-b13e-c8a98f88c137' of type subvolume
Nov 28 05:10:55 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '619d1399-3e67-47c1-b13e-c8a98f88c137' of type subvolume
Nov 28 05:10:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "619d1399-3e67-47c1-b13e-c8a98f88c137", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/619d1399-3e67-47c1-b13e-c8a98f88c137'' moved to trashcan
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:619d1399-3e67-47c1-b13e-c8a98f88c137, vol_name:cephfs) < ""
Nov 28 05:10:55 localhost nova_compute[280168]: 2025-11-28 10:10:55.623 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:10:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 120 KiB/s wr, 11 op/s
Nov 28 05:10:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:10:56 localhost nova_compute[280168]: 2025-11-28 10:10:56.925 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:10:57 localhost openstack_network_exporter[240973]: ERROR 10:10:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:10:57 localhost openstack_network_exporter[240973]: ERROR 10:10:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:10:57 localhost openstack_network_exporter[240973]: ERROR 10:10:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:10:57 localhost openstack_network_exporter[240973]: ERROR 10:10:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:10:57 localhost openstack_network_exporter[240973]:
Nov 28 05:10:57 localhost openstack_network_exporter[240973]: ERROR 10:10:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:10:57 localhost openstack_network_exporter[240973]:
Nov 28 05:10:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f288b089-d265-4559-9acd-a03615016513", "format": "json"}]: dispatch
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f288b089-d265-4559-9acd-a03615016513, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f288b089-d265-4559-9acd-a03615016513, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:10:57 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f288b089-d265-4559-9acd-a03615016513' of type subvolume
Nov 28 05:10:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:10:57.694+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f288b089-d265-4559-9acd-a03615016513' of type subvolume
Nov 28 05:10:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f288b089-d265-4559-9acd-a03615016513", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f288b089-d265-4559-9acd-a03615016513'' moved to trashcan
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:10:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f288b089-d265-4559-9acd-a03615016513, vol_name:cephfs) < ""
Nov 28 05:10:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v568: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 171 KiB/s wr, 16 op/s
Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "snap_name": "8728644f-8752-4fbc-9dba-996bfb595ace_609d618f-0ee5-41e8-beae-89d616159cd4", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace_609d618f-0ee5-41e8-beae-89d616159cd4, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp'
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp' to config b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta'
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace_609d618f-0ee5-41e8-beae-89d616159cd4, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "snap_name": "8728644f-8752-4fbc-9dba-996bfb595ace", "force": true, "format": "json"}]: dispatch
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < ""
Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config
b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta.tmp' to config b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95/.meta' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:8728644f-8752-4fbc-9dba-996bfb595ace, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2. 
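The `metadata_manager` entries above ("wrote 155 bytes to config …/.meta.tmp" followed by "Renamed …/.meta.tmp to config …/.meta") show the classic write-then-rename pattern for updating a config file atomically. The sketch below illustrates that general pattern only; it is not Ceph's actual `metadata_manager` code, and the function name is illustrative.

```python
import os
import tempfile


def write_config_atomically(path: str, data: bytes) -> None:
    """Write a config file via a temp sibling, then rename over the target.

    Mirrors the '.meta.tmp' -> '.meta' sequence in the log entries above:
    on POSIX filesystems the rename is atomic, so a reader either sees the
    old complete config or the new complete config, never a partial write.
    """
    tmp_path = path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())   # make sure the bytes are on disk first
    os.replace(tmp_path, path)  # atomic rename; replaces any existing file
```

A crash between the write and the rename leaves only a stale `.tmp` file behind, which is why the log never shows a half-written `.meta`.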
Nov 28 05:10:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:10:58 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:10:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 28 05:10:58 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:10:58 localhost nova_compute[280168]: 2025-11-28 10:10:58.622 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/.meta.tmp' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/.meta.tmp' to config b'/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/.meta' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "format": "json"}]: dispatch Nov 28 
05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "47d6dddf-401e-4ff2-980a-f34a0aa62099", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/47d6dddf-401e-4ff2-980a-f34a0aa62099/.meta.tmp' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/47d6dddf-401e-4ff2-980a-f34a0aa62099/.meta.tmp' to config b'/volumes/_nogroup/47d6dddf-401e-4ff2-980a-f34a0aa62099/.meta' Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:10:58 localhost podman[239012]: time="2025-11-28T10:10:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:10:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : 
from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "47d6dddf-401e-4ff2-980a-f34a0aa62099", "format": "json"}]: dispatch Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:10:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:10:58 localhost podman[239012]: @ - - [28/Nov/2025:10:10:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:10:58 localhost podman[239012]: @ - - [28/Nov/2025:10:10:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19241 "" "Go-http-client/1.1" Nov 28 05:10:59 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:10:59 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:10:59 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:10:59 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 28 05:10:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 128 KiB/s wr, 10 op/s Nov 28 05:11:00 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e272 e272: 6 total, 6 up, 6 in Nov 28 05:11:00 localhost nova_compute[280168]: 2025-11-28 10:11:00.646 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:00 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:11:00 localhost podman[322831]: 2025-11-28 10:11:00.981487083 +0000 UTC m=+0.092488016 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git) Nov 28 05:11:00 localhost podman[322831]: 2025-11-28 10:11:00.992805953 +0000 UTC m=+0.103806916 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, 
distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 05:11:01 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:11:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "format": "json"}]: dispatch Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:01.475+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e15fa73c-5a04-4afe-898a-d761ebf88b95' of type subvolume Nov 28 05:11:01 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e15fa73c-5a04-4afe-898a-d761ebf88b95' of type subvolume Nov 28 05:11:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e15fa73c-5a04-4afe-898a-d761ebf88b95", "force": true, "format": "json"}]: dispatch Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e15fa73c-5a04-4afe-898a-d761ebf88b95'' moved to trashcan Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e15fa73c-5a04-4afe-898a-d761ebf88b95, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:01 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:01 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:01 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 159 KiB/s wr, 14 op/s Nov 28 05:11:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, 
sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:02 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e", "osd", "allow rw pool=manila_data namespace=fsvolumens_9342293c-12c1-4a10-bd1e-8eba9e15cd79", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:02 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e", "osd", "allow rw pool=manila_data namespace=fsvolumens_9342293c-12c1-4a10-bd1e-8eba9e15cd79", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, 
vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "47d6dddf-401e-4ff2-980a-f34a0aa62099", "format": "json"}]: dispatch Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:02.214+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '47d6dddf-401e-4ff2-980a-f34a0aa62099' of type subvolume Nov 28 05:11:02 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '47d6dddf-401e-4ff2-980a-f34a0aa62099' of type subvolume Nov 28 05:11:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "47d6dddf-401e-4ff2-980a-f34a0aa62099", "force": true, "format": "json"}]: dispatch Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/47d6dddf-401e-4ff2-980a-f34a0aa62099'' moved to trashcan Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] 
queuing job for volume 'cephfs' Nov 28 05:11:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:47d6dddf-401e-4ff2-980a-f34a0aa62099, vol_name:cephfs) < "" Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch 
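The mon audit entries above show what `fs subvolume authorize` ultimately sends to the monitor: an `auth get-or-create` command whose caps scope the client to the subvolume path (mds), to the data pool's `fsvolumens_<sub_name>` namespace (osd), and to read-only monitor access. The helper below reconstructs that command payload purely from the logged JSON; the function name and parameters are illustrative assumptions, not the mgr/volumes module's real API.

```python
def build_subvolume_auth_cmd(auth_id: str, sub_name: str, sub_path: str,
                             pool: str = "manila_data",
                             access_level: str = "rw") -> dict:
    """Assemble an 'auth get-or-create' payload shaped like the audit-log
    commands above (illustrative reconstruction, not Ceph source code).

    The caps list alternates daemon type and capability string, exactly as
    the mon logs it: mds access restricted to the subvolume path, osd
    access restricted to the per-subvolume RADOS namespace, mon read-only.
    """
    return {
        "prefix": "auth get-or-create",
        "entity": f"client.{auth_id}",
        "caps": [
            "mds", f"allow {access_level} path={sub_path}",
            "osd", f"allow {access_level} pool={pool} "
                   f"namespace=fsvolumens_{sub_name}",
            "mon", "allow r",
        ],
        "format": "json",
    }
```

Note that the namespace cap is what makes `namespace_isolated: true` subvolumes effective: even with pool-wide placement, the client key can only touch objects in its own `fsvolumens_…` namespace.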
Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e", "osd", "allow rw pool=manila_data namespace=fsvolumens_9342293c-12c1-4a10-bd1e-8eba9e15cd79", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e", "osd", "allow rw pool=manila_data namespace=fsvolumens_9342293c-12c1-4a10-bd1e-8eba9e15cd79", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:02 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e", "osd", "allow rw pool=manila_data namespace=fsvolumens_9342293c-12c1-4a10-bd1e-8eba9e15cd79", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96f781a9-18ad-48a6-a288-5b831718a338", "format": "json"}]: dispatch Nov 28 05:11:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:03 localhost nova_compute[280168]: 2025-11-28 10:11:03.624 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v572: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 159 KiB/s wr, 14 op/s Nov 28 05:11:04 localhost nova_compute[280168]: 2025-11-28 10:11:04.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:11:04 localhost nova_compute[280168]: 2025-11-28 10:11:04.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "96f781a9-18ad-48a6-a288-5b831718a338", "format": "json"}]: dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
create", "vol_name": "cephfs", "sub_name": "fef85cbb-0b69-4eb4-8353-b0bae82d0d83", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fef85cbb-0b69-4eb4-8353-b0bae82d0d83/.meta.tmp' Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fef85cbb-0b69-4eb4-8353-b0bae82d0d83/.meta.tmp' to config b'/volumes/_nogroup/fef85cbb-0b69-4eb4-8353-b0bae82d0d83/.meta' Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fef85cbb-0b69-4eb4-8353-b0bae82d0d83", "format": "json"}]: dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:11:05 Nov 28 05:11:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:11:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:11:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_metadata', 'manila_data', '.mgr', 'volumes', 'images', 'vms', 'backups'] Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:11:05 localhost nova_compute[280168]: 2025-11-28 10:11:05.686 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 28 05:11:05 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with 
auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:11:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:11:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v573: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 159 KiB/s wr, 14 op/s Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' 
root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.9989356504745952e-06 of space, bias 1.0, pg target 0.0005967881944444444 quantized to 32 (current 32) Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:11:05 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0008502436953284943 of space, bias 4.0, pg target 0.6767939814814814 quantized to 16 (current 16) Nov 28 05:11:05 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO 
root] load_schedules: volumes, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:11:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:11:06 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:06 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:06 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:06 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 28 05:11:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "eb97c54f-5603-4ed1-8403-ba162352ea4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/eb97c54f-5603-4ed1-8403-ba162352ea4f/.meta.tmp' Nov 28 
05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/eb97c54f-5603-4ed1-8403-ba162352ea4f/.meta.tmp' to config b'/volumes/_nogroup/eb97c54f-5603-4ed1-8403-ba162352ea4f/.meta' Nov 28 05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "eb97c54f-5603-4ed1-8403-ba162352ea4f", "format": "json"}]: dispatch Nov 28 05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 e273: 6 total, 6 up, 6 in Nov 28 05:11:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs 
subvolume deauthorize, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, 
client_metadata.root=/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79/6ecb968c-6419-4825-9a25-21daec62c92e Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "format": "json"}]: dispatch Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:07 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:07.644+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9342293c-12c1-4a10-bd1e-8eba9e15cd79' of type subvolume Nov 28 05:11:07 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9342293c-12c1-4a10-bd1e-8eba9e15cd79' of type subvolume Nov 28 05:11:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9342293c-12c1-4a10-bd1e-8eba9e15cd79", "force": true, "format": "json"}]: dispatch Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9342293c-12c1-4a10-bd1e-8eba9e15cd79'' moved to trashcan Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9342293c-12c1-4a10-bd1e-8eba9e15cd79, vol_name:cephfs) < "" Nov 28 05:11:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 199 KiB/s wr, 18 op/s Nov 28 05:11:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": 
"38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:08 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:08 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:08 localhost nova_compute[280168]: 2025-11-28 10:11:08.625 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:08 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:08 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data 
namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/.meta.tmp' Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/.meta.tmp' to config b'/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/.meta' Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "format": "json"}]: 
dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fef85cbb-0b69-4eb4-8353-b0bae82d0d83", "format": "json"}]: dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:08.980+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fef85cbb-0b69-4eb4-8353-b0bae82d0d83' of type subvolume Nov 28 05:11:08 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fef85cbb-0b69-4eb4-8353-b0bae82d0d83' of type subvolume Nov 28 05:11:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fef85cbb-0b69-4eb4-8353-b0bae82d0d83", "force": true, "format": "json"}]: dispatch Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO 
volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fef85cbb-0b69-4eb4-8353-b0bae82d0d83'' moved to trashcan Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fef85cbb-0b69-4eb4-8353-b0bae82d0d83, vol_name:cephfs) < "" Nov 28 05:11:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "eb97c54f-5603-4ed1-8403-ba162352ea4f", "new_size": 2147483648, "format": "json"}]: dispatch Nov 28 05:11:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data 
namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v576: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 167 KiB/s wr, 15 op/s Nov 28 05:11:10 localhost nova_compute[280168]: 2025-11-28 10:11:10.737 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 142 KiB/s wr, 13 op/s Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": 
"1d0b6ffa-5038-4546-af4f-2ad9a9443222", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c", "osd", "allow rw pool=manila_data namespace=fsvolumens_1d0b6ffa-5038-4546-af4f-2ad9a9443222", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_1d0b6ffa-5038-4546-af4f-2ad9a9443222", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 28 05:11:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, 
vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "eb97c54f-5603-4ed1-8403-ba162352ea4f", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'eb97c54f-5603-4ed1-8403-ba162352ea4f' of type subvolume Nov 28 05:11:12 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:12.449+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'eb97c54f-5603-4ed1-8403-ba162352ea4f' of type subvolume Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "eb97c54f-5603-4ed1-8403-ba162352ea4f", "force": true, "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/eb97c54f-5603-4ed1-8403-ba162352ea4f'' moved to trashcan Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:eb97c54f-5603-4ed1-8403-ba162352ea4f, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "0b7976e6-bdd9-4983-8639-f0b5b8a68920", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 
28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/0b7976e6-bdd9-4983-8639-f0b5b8a68920/.meta.tmp' Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/0b7976e6-bdd9-4983-8639-f0b5b8a68920/.meta.tmp' to config b'/volumes/_nogroup/0b7976e6-bdd9-4983-8639-f0b5b8a68920/.meta' Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "0b7976e6-bdd9-4983-8639-f0b5b8a68920", "format": "json"}]: dispatch Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_1d0b6ffa-5038-4546-af4f-2ad9a9443222", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c", "osd", "allow rw pool=manila_data namespace=fsvolumens_1d0b6ffa-5038-4546-af4f-2ad9a9443222", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c", "osd", "allow rw pool=manila_data namespace=fsvolumens_1d0b6ffa-5038-4546-af4f-2ad9a9443222", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:12 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 28 05:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. 
Nov 28 05:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:11:13 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:11:13 localhost nova_compute[280168]: 2025-11-28 10:11:13.628 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:13 localhost podman[322855]: 2025-11-28 10:11:13.633390465 +0000 UTC m=+0.074926774 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible) Nov 28 05:11:13 localhost systemd[1]: tmp-crun.O6uj6M.mount: Deactivated successfully. Nov 28 05:11:13 localhost podman[322866]: 2025-11-28 10:11:13.690808517 +0000 UTC m=+0.119209530 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:11:13 localhost podman[322855]: 2025-11-28 10:11:13.701480727 +0000 UTC m=+0.143017036 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 05:11:13 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:11:13 localhost podman[322856]: 2025-11-28 10:11:13.665482436 +0000 UTC m=+0.096996575 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', 
'/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0) Nov 28 05:11:13 localhost podman[322854]: 2025-11-28 10:11:13.746957191 +0000 UTC m=+0.186241210 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', 
'/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:11:13 localhost podman[322856]: 2025-11-28 10:11:13.750724508 +0000 UTC m=+0.182238677 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', 
'/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:11:13 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:11:13 localhost podman[322866]: 2025-11-28 10:11:13.774443289 +0000 UTC m=+0.202844312 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:11:13 localhost podman[322854]: 2025-11-28 10:11:13.785441289 +0000 UTC m=+0.224725298 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm) Nov 28 05:11:13 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:11:13 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:11:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v578: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 142 KiB/s wr, 13 op/s Nov 28 05:11:15 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:15 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:11:15 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:15 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:15 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:15 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:15 localhost nova_compute[280168]: 2025-11-28 10:11:15.768 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:15 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:15 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:15 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:15 localhost ceph-mon[301134]: 
from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:15 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:11:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v579: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 142 KiB/s wr, 13 op/s Nov 28 05:11:15 localhost podman[322935]: 2025-11-28 10:11:15.981564063 +0000 UTC m=+0.087392229 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:11:15 localhost podman[322935]: 2025-11-28 10:11:15.990287342 +0000 UTC m=+0.096115498 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:11:16 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "0b7976e6-bdd9-4983-8639-f0b5b8a68920", "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b7976e6-bdd9-4983-8639-f0b5b8a68920' of type subvolume Nov 28 05:11:16 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:16.102+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '0b7976e6-bdd9-4983-8639-f0b5b8a68920' of type subvolume Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "0b7976e6-bdd9-4983-8639-f0b5b8a68920", "force": true, "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/0b7976e6-bdd9-4983-8639-f0b5b8a68920'' moved to trashcan Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:0b7976e6-bdd9-4983-8639-f0b5b8a68920, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:16 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:16 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) 
log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222/d4c11bd3-8d93-41b4-9ea4-9848a75f8c7c Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'1d0b6ffa-5038-4546-af4f-2ad9a9443222' of type subvolume Nov 28 05:11:16 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:16.530+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1d0b6ffa-5038-4546-af4f-2ad9a9443222' of type subvolume Nov 28 05:11:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1d0b6ffa-5038-4546-af4f-2ad9a9443222", "force": true, "format": "json"}]: dispatch Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1d0b6ffa-5038-4546-af4f-2ad9a9443222'' moved to trashcan Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1d0b6ffa-5038-4546-af4f-2ad9a9443222, vol_name:cephfs) < "" Nov 28 05:11:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:16 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:16 
localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 640 B/s rd, 170 KiB/s wr, 14 op/s Nov 28 05:11:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "671f3f7b-b41c-49c8-8917-acdf4c0a35f5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/671f3f7b-b41c-49c8-8917-acdf4c0a35f5/.meta.tmp' Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/671f3f7b-b41c-49c8-8917-acdf4c0a35f5/.meta.tmp' to config b'/volumes/_nogroup/671f3f7b-b41c-49c8-8917-acdf4c0a35f5/.meta' Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "671f3f7b-b41c-49c8-8917-acdf4c0a35f5", "format": "json"}]: dispatch Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:18 localhost nova_compute[280168]: 2025-11-28 10:11:18.629 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:11:18 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:18 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:11:18 localhost ceph-mon[301134]: log_channel(audit) log [INF] : 
from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:19 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:19 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": 
"auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:19 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:11:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/.meta.tmp' Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/.meta.tmp' to config b'/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/.meta' Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "format": "json"}]: dispatch Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "09b6c71b-6f5f-4037-840b-757a404b81fe", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/09b6c71b-6f5f-4037-840b-757a404b81fe/.meta.tmp' Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/09b6c71b-6f5f-4037-840b-757a404b81fe/.meta.tmp' to config b'/volumes/_nogroup/09b6c71b-6f5f-4037-840b-757a404b81fe/.meta' Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "09b6c71b-6f5f-4037-840b-757a404b81fe", "format": "json"}]: dispatch Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v581: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 158 KiB/s wr, 13 op/s Nov 28 05:11:20 localhost nova_compute[280168]: 2025-11-28 10:11:20.798 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:20 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:11:20 localhost podman[322959]: 2025-11-28 10:11:20.974277816 +0000 UTC m=+0.081567890 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd) Nov 28 05:11:20 localhost podman[322959]: 2025-11-28 10:11:20.984952255 +0000 UTC m=+0.092242339 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:11:20 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:11:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "671f3f7b-b41c-49c8-8917-acdf4c0a35f5", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch Nov 28 05:11:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 209 KiB/s wr, 18 op/s Nov 28 
05:11:22 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:22 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:22 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1daffb16-bcf6-4808-941d-7da7540d99dc", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": 
"client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1daffb16-bcf6-4808-941d-7da7540d99dc", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:22 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:22 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:11:22 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:11:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:22 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", 
"entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:22 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "09b6c71b-6f5f-4037-840b-757a404b81fe", "format": "json"}]: dispatch Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:09b6c71b-6f5f-4037-840b-757a404b81fe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:09b6c71b-6f5f-4037-840b-757a404b81fe, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:23.030+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on 
subvolume '09b6c71b-6f5f-4037-840b-757a404b81fe' of type subvolume Nov 28 05:11:23 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '09b6c71b-6f5f-4037-840b-757a404b81fe' of type subvolume Nov 28 05:11:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "09b6c71b-6f5f-4037-840b-757a404b81fe", "force": true, "format": "json"}]: dispatch Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/09b6c71b-6f5f-4037-840b-757a404b81fe'' moved to trashcan Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:09b6c71b-6f5f-4037-840b-757a404b81fe, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ea36305f-cfab-4e87-868a-2ee1b584f374", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1daffb16-bcf6-4808-941d-7da7540d99dc", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1daffb16-bcf6-4808-941d-7da7540d99dc", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a", "osd", "allow rw pool=manila_data namespace=fsvolumens_1daffb16-bcf6-4808-941d-7da7540d99dc", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow 
r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ea36305f-cfab-4e87-868a-2ee1b584f374/.meta.tmp' Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ea36305f-cfab-4e87-868a-2ee1b584f374/.meta.tmp' to config b'/volumes/_nogroup/ea36305f-cfab-4e87-868a-2ee1b584f374/.meta' Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ea36305f-cfab-4e87-868a-2ee1b584f374", "format": "json"}]: dispatch 
Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:23 localhost nova_compute[280168]: 2025-11-28 10:11:23.632 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v583: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 143 KiB/s wr, 12 op/s Nov 28 05:11:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "671f3f7b-b41c-49c8-8917-acdf4c0a35f5", "format": "json"}]: dispatch Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:24 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:24.622+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '671f3f7b-b41c-49c8-8917-acdf4c0a35f5' of type subvolume Nov 28 05:11:24 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'671f3f7b-b41c-49c8-8917-acdf4c0a35f5' of type subvolume Nov 28 05:11:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "671f3f7b-b41c-49c8-8917-acdf4c0a35f5", "force": true, "format": "json"}]: dispatch Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/671f3f7b-b41c-49c8-8917-acdf4c0a35f5'' moved to trashcan Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:671f3f7b-b41c-49c8-8917-acdf4c0a35f5, vol_name:cephfs) < "" Nov 28 05:11:25 localhost nova_compute[280168]: 2025-11-28 10:11:25.833 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:11:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 
426 B/s rd, 143 KiB/s wr, 12 op/s Nov 28 05:11:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:11:25 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:25 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:11:25 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:26 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:26 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:11:26 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:26 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:26 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc/48e67baa-974d-4f75-aae2-5962f27bbd3a Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": 
"9861e523-796e-4848-a7e0-e4ce88058d68", "snap_name": "feab8b53-fb10-403a-9e5e-ef10a5640c05_4a989d3d-660f-4fdb-879a-341eac23ba00", "force": true, "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05_4a989d3d-660f-4fdb-879a-341eac23ba00, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta' Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05_4a989d3d-660f-4fdb-879a-341eac23ba00, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "snap_name": "feab8b53-fb10-403a-9e5e-ef10a5640c05", "force": true, "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta.tmp' to config b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68/.meta' Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:feab8b53-fb10-403a-9e5e-ef10a5640c05, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1daffb16-bcf6-4808-941d-7da7540d99dc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1daffb16-bcf6-4808-941d-7da7540d99dc, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:26.695+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1daffb16-bcf6-4808-941d-7da7540d99dc' of type subvolume Nov 28 05:11:26 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1daffb16-bcf6-4808-941d-7da7540d99dc' of type subvolume Nov 28 05:11:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1daffb16-bcf6-4808-941d-7da7540d99dc", "force": true, "format": "json"}]: dispatch Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1daffb16-bcf6-4808-941d-7da7540d99dc'' moved to trashcan Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1daffb16-bcf6-4808-941d-7da7540d99dc, vol_name:cephfs) < "" Nov 28 05:11:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ea36305f-cfab-4e87-868a-2ee1b584f374", "format": "json"}]: dispatch Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ea36305f-cfab-4e87-868a-2ee1b584f374, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:27 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 
28 05:11:27 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ea36305f-cfab-4e87-868a-2ee1b584f374, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:27 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:27.143+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea36305f-cfab-4e87-868a-2ee1b584f374' of type subvolume Nov 28 05:11:27 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ea36305f-cfab-4e87-868a-2ee1b584f374' of type subvolume Nov 28 05:11:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ea36305f-cfab-4e87-868a-2ee1b584f374", "force": true, "format": "json"}]: dispatch Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ea36305f-cfab-4e87-868a-2ee1b584f374'' moved to trashcan Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ea36305f-cfab-4e87-868a-2ee1b584f374, vol_name:cephfs) < "" Nov 28 05:11:27 localhost openstack_network_exporter[240973]: ERROR 10:11:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:11:27 localhost openstack_network_exporter[240973]: ERROR 10:11:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:11:27 localhost openstack_network_exporter[240973]: ERROR 10:11:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:11:27 localhost openstack_network_exporter[240973]: ERROR 10:11:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:11:27 localhost openstack_network_exporter[240973]: Nov 28 05:11:27 localhost openstack_network_exporter[240973]: ERROR 10:11:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:11:27 localhost openstack_network_exporter[240973]: Nov 28 05:11:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v585: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 241 KiB/s wr, 20 op/s Nov 28 05:11:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac978b2f-998a-4925-a071-fbc679b9c4b6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ac978b2f-998a-4925-a071-fbc679b9c4b6/.meta.tmp' Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ac978b2f-998a-4925-a071-fbc679b9c4b6/.meta.tmp' to config b'/volumes/_nogroup/ac978b2f-998a-4925-a071-fbc679b9c4b6/.meta' Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:28 localhost nova_compute[280168]: 2025-11-28 10:11:28.633 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac978b2f-998a-4925-a071-fbc679b9c4b6", "format": "json"}]: dispatch Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:28 localhost podman[239012]: time="2025-11-28T10:11:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:11:28 localhost podman[239012]: @ - - [28/Nov/2025:10:11:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:11:28 localhost 
podman[239012]: @ - - [28/Nov/2025:10:11:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19245 "" "Go-http-client/1.1" Nov 28 05:11:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:11:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": 
"client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "format": "json"}]: dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9861e523-796e-4848-a7e0-e4ce88058d68, format:json, prefix:fs clone status, vol_name:cephfs) < "" 
Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9861e523-796e-4848-a7e0-e4ce88058d68, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:29.855+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9861e523-796e-4848-a7e0-e4ce88058d68' of type subvolume Nov 28 05:11:29 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9861e523-796e-4848-a7e0-e4ce88058d68' of type subvolume Nov 28 05:11:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9861e523-796e-4848-a7e0-e4ce88058d68", "force": true, "format": "json"}]: dispatch Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9861e523-796e-4848-a7e0-e4ce88058d68'' moved to trashcan Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9861e523-796e-4848-a7e0-e4ce88058d68, vol_name:cephfs) < "" Nov 28 05:11:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 149 KiB/s wr, 13 op/s Nov 28 05:11:30 localhost 
ovn_metadata_agent[158525]: 2025-11-28 10:11:30.070 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:11:30 localhost ovn_metadata_agent[158525]: 2025-11-28 10:11:30.072 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:11:30 localhost nova_compute[280168]: 2025-11-28 10:11:30.071 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow 
r"], "format": "json"} : dispatch Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data 
namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0. Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.231206) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690231251, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2358, "num_deletes": 255, "total_data_size": 3139472, "memory_usage": 3192704, "flush_reason": "Manual Compaction"} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690248709, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2054274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27675, "largest_seqno": 30028, "table_properties": {"data_size": 2044761, "index_size": 5574, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25919, "raw_average_key_size": 22, "raw_value_size": 2023509, "raw_average_value_size": 1764, "num_data_blocks": 240, "num_entries": 1147, "num_filter_entries": 1147, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", 
"column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324591, "oldest_key_time": 1764324591, "file_creation_time": 1764324690, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 17557 microseconds, and 6603 cpu microseconds. Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.248759) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2054274 bytes OK Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.248787) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.251202) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.251226) EVENT_LOG_v1 {"time_micros": 1764324690251220, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.251249) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3127799, prev total WAL file size 3127799, number of live WAL files 2. Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.252329) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. 
'7061786F73003132383032' seq:0, type:0; will stop at (end) Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2006KB)], [42(17MB)] Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690252402, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 20441057, "oldest_snapshot_seqno": -1} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13960 keys, 18920322 bytes, temperature: kUnknown Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690394984, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 18920322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18839438, "index_size": 44879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34949, "raw_key_size": 372407, "raw_average_key_size": 26, "raw_value_size": 18601287, "raw_average_value_size": 1332, "num_data_blocks": 1690, "num_entries": 13960, "num_filter_entries": 13960, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; 
strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324690, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.395357) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 18920322 bytes Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.397605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 132.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 17.5 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(19.2) write-amplify(9.2) OK, records in: 14497, records dropped: 537 output_compression: NoCompression Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.397633) EVENT_LOG_v1 {"time_micros": 1764324690397620, "job": 24, "event": "compaction_finished", "compaction_time_micros": 142745, "compaction_time_cpu_micros": 52295, "output_level": 6, "num_output_files": 1, "total_output_size": 18920322, "num_input_records": 14497, "num_output_records": 13960, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file 
/var/lib/ceph/mon/ceph-np0005538515/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690398034, "job": 24, "event": "table_file_deletion", "file_number": 44} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324690400473, "job": 24, "event": "table_file_deletion", "file_number": 42} Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.252181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.400597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.400604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.400608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.400611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:30.400613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": 
"783fcf38-4096-452b-b965-78e606bf4fa1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:30 localhost nova_compute[280168]: 2025-11-28 10:11:30.886 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/783fcf38-4096-452b-b965-78e606bf4fa1/.meta.tmp' Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/783fcf38-4096-452b-b965-78e606bf4fa1/.meta.tmp' to config b'/volumes/_nogroup/783fcf38-4096-452b-b965-78e606bf4fa1/.meta' Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "783fcf38-4096-452b-b965-78e606bf4fa1", "format": "json"}]: dispatch Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:31 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:11:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e274 e274: 6 total, 6 up, 6 in Nov 28 05:11:31 localhost podman[322998]: 2025-11-28 10:11:31.340155207 +0000 UTC m=+0.088615756 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, version=9.6, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git) Nov 28 05:11:31 localhost podman[322998]: 2025-11-28 10:11:31.353305383 +0000 UTC m=+0.101765922 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 
Minimal, architecture=x86_64, vcs-type=git, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal) Nov 28 05:11:31 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:11:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 172 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 211 KiB/s wr, 17 op/s Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:11:32 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 752b3d70-42ae-49ea-a3fa-11cef9b83bd0 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:11:32 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 752b3d70-42ae-49ea-a3fa-11cef9b83bd0 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:11:32 localhost ceph-mgr[286188]: [progress INFO root] Completed event 752b3d70-42ae-49ea-a3fa-11cef9b83bd0 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:11:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5 Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:32 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 28 05:11:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 28 05:11:33 localhost nova_compute[280168]: 2025-11-28 10:11:33.636 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v589: 177 pgs: 2 active+clean+snaptrim, 3 
active+clean+snaptrim_wait, 172 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 211 KiB/s wr, 17 op/s Nov 28 05:11:34 localhost ovn_metadata_agent[158525]: 2025-11-28 10:11:34.073 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:11:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac978b2f-998a-4925-a071-fbc679b9c4b6", "format": "json"}]: dispatch Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:34.523+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac978b2f-998a-4925-a071-fbc679b9c4b6' of type subvolume Nov 28 05:11:34 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ac978b2f-998a-4925-a071-fbc679b9c4b6' of type subvolume Nov 28 05:11:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": 
"ac978b2f-998a-4925-a071-fbc679b9c4b6", "force": true, "format": "json"}]: dispatch Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ac978b2f-998a-4925-a071-fbc679b9c4b6'' moved to trashcan Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ac978b2f-998a-4925-a071-fbc679b9c4b6, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "783fcf38-4096-452b-b965-78e606bf4fa1", "format": "json"}]: dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:783fcf38-4096-452b-b965-78e606bf4fa1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:783fcf38-4096-452b-b965-78e606bf4fa1, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:35.076+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '783fcf38-4096-452b-b965-78e606bf4fa1' of type subvolume Nov 28 05:11:35 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 
'783fcf38-4096-452b-b965-78e606bf4fa1' of type subvolume Nov 28 05:11:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "783fcf38-4096-452b-b965-78e606bf4fa1", "force": true, "format": "json"}]: dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/783fcf38-4096-452b-b965-78e606bf4fa1'' moved to trashcan Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:783fcf38-4096-452b-b965-78e606bf4fa1, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:11:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:35 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:11:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:35 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 28 05:11:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 05:11:35 localhost nova_compute[280168]: 2025-11-28 10:11:35.918 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v590: 177 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 172 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 211 KiB/s wr, 17 op/s Nov 28 05:11:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:36 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 
05:11:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: 
from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0. 
Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.770544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696770607, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 443, "num_deletes": 259, "total_data_size": 282092, "memory_usage": 292024, "flush_reason": "Manual Compaction"} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696775323, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 185011, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30033, "largest_seqno": 30471, "table_properties": {"data_size": 182545, "index_size": 513, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6728, "raw_average_key_size": 18, "raw_value_size": 177115, "raw_average_value_size": 498, "num_data_blocks": 23, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; 
zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324690, "oldest_key_time": 1764324690, "file_creation_time": 1764324696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 4844 microseconds, and 2011 cpu microseconds. Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.775392) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 185011 bytes OK Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.775425) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.778160) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.778189) EVENT_LOG_v1 {"time_micros": 1764324696778182, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.778218) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 279188, prev total WAL file size 287987, number of 
live WAL files 2. Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.779656) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353239' seq:0, type:0; will stop at (end) Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(180KB)], [45(18MB)] Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696779710, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 19105333, "oldest_snapshot_seqno": -1} Nov 28 05:11:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13774 keys, 18715457 bytes, temperature: kUnknown Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696907199, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18715457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18635965, "index_size": 43935, "index_partitions": 0, "top_level_index_size": 0, 
"index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34501, "raw_key_size": 369648, "raw_average_key_size": 26, "raw_value_size": 18401186, "raw_average_value_size": 1335, "num_data_blocks": 1643, "num_entries": 13774, "num_filter_entries": 13774, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.907693) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18715457 bytes Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.909956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.6 rd, 146.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 18.0 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(204.4) write-amplify(101.2) OK, records in: 14315, records dropped: 541 output_compression: NoCompression Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.909995) EVENT_LOG_v1 {"time_micros": 1764324696909979, "job": 26, "event": "compaction_finished", "compaction_time_micros": 127706, "compaction_time_cpu_micros": 56630, "output_level": 6, "num_output_files": 1, "total_output_size": 18715457, "num_input_records": 14315, "num_output_records": 13774, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696910228, "job": 26, "event": "table_file_deletion", "file_number": 47} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324696913773, 
"job": 26, "event": "table_file_deletion", "file_number": 45} Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.779567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.913816) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.913824) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.913828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.913832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:36 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:11:36.913836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:11:37 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "96f781a9-18ad-48a6-a288-5b831718a338", "format": "json"}]: dispatch Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:96f781a9-18ad-48a6-a288-5b831718a338, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:37 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs 
subvolume rm", "vol_name": "cephfs", "sub_name": "96f781a9-18ad-48a6-a288-5b831718a338", "force": true, "format": "json"}]: dispatch Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/96f781a9-18ad-48a6-a288-5b831718a338'' moved to trashcan Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:11:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:96f781a9-18ad-48a6-a288-5b831718a338, vol_name:cephfs) < "" Nov 28 05:11:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 210 KiB/s wr, 17 op/s Nov 28 05:11:38 localhost nova_compute[280168]: 2025-11-28 10:11:38.638 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", 
"entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5 Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:39 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:11:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 28 05:11:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": 
"3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:11:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 28 05:11:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 210 KiB/s wr, 17 op/s Nov 28 05:11:40 localhost nova_compute[280168]: 2025-11-28 10:11:40.957 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "snap_name": "cdc57bbb-b20f-407d-9c9a-d134937d596e_d5b0525c-04f8-4742-a7f8-f2e936812088", "force": true, "format": "json"}]: dispatch Nov 28 05:11:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e_d5b0525c-04f8-4742-a7f8-f2e936812088, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' Nov 28 05:11:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' to config 
b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta' Nov 28 05:11:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e_d5b0525c-04f8-4742-a7f8-f2e936812088, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "snap_name": "cdc57bbb-b20f-407d-9c9a-d134937d596e", "force": true, "format": "json"}]: dispatch Nov 28 05:11:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' Nov 28 05:11:41 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta.tmp' to config b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a/.meta' Nov 28 05:11:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cdc57bbb-b20f-407d-9c9a-d134937d596e, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e275 e275: 6 total, 6 up, 6 in Nov 28 05:11:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 
343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:11:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v594: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 189 KiB/s wr, 16 op/s Nov 28 05:11:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:42 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:42 localhost ceph-mon[301134]: log_channel(audit) 
log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:42 localhost ceph-mgr[286188]: [volumes 
INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:42 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8 Nov 28 05:11:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", 
"allow r"], "format": "json"} v 0) Nov 28 05:11:42 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < "" Nov 28 05:11:43 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:43 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:43 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch 
Nov 28 05:11:43 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:11:43 localhost nova_compute[280168]: 2025-11-28 10:11:43.640 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:11:43 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:11:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 189 KiB/s wr, 16 op/s Nov 28 05:11:43 localhost podman[323089]: 2025-11-28 10:11:43.994781303 +0000 UTC m=+0.094162298 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm) Nov 28 05:11:44 localhost podman[323089]: 2025-11-28 10:11:44.005389929 +0000 UTC m=+0.104770954 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute) Nov 28 05:11:44 localhost podman[323091]: 2025-11-28 10:11:44.038397839 +0000 UTC 
m=+0.133009707 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125) Nov 28 05:11:44 localhost podman[323091]: 2025-11-28 10:11:44.043538537 +0000 UTC m=+0.138150505 container exec_died 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 05:11:44 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. 
Nov 28 05:11:44 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:11:44 localhost systemd[1]: tmp-crun.7JTEMD.mount: Deactivated successfully. Nov 28 05:11:44 localhost podman[323090]: 2025-11-28 10:11:44.10126588 +0000 UTC m=+0.201166051 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:11:44 localhost podman[323092]: 2025-11-28 10:11:44.159191787 +0000 UTC m=+0.249243165 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:11:44 localhost podman[323090]: 2025-11-28 10:11:44.164195942 +0000 UTC m=+0.264096063 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3) Nov 28 05:11:44 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:11:44 localhost podman[323092]: 2025-11-28 10:11:44.22242757 +0000 UTC m=+0.312478948 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:11:44 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:11:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "format": "json"}]: dispatch Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:223d1419-7407-477a-a3da-e408c7b6c43a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:223d1419-7407-477a-a3da-e408c7b6c43a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:44 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '223d1419-7407-477a-a3da-e408c7b6c43a' of type subvolume Nov 28 05:11:44 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:44.262+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '223d1419-7407-477a-a3da-e408c7b6c43a' of type subvolume Nov 28 05:11:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "223d1419-7407-477a-a3da-e408c7b6c43a", "force": true, "format": "json"}]: dispatch Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/223d1419-7407-477a-a3da-e408c7b6c43a'' moved to trashcan Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Nov 28 05:11:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:223d1419-7407-477a-a3da-e408c7b6c43a, vol_name:cephfs) < "" Nov 28 05:11:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e276 e276: 6 total, 6 up, 6 in Nov 28 05:11:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:11:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:11:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 28 05:11:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:11:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : 
from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:11:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 89 KiB/s wr, 8 op/s
Nov 28 05:11:45 localhost nova_compute[280168]: 2025-11-28 10:11:45.989 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp'
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta'
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "format": "json"}]: dispatch
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0)
Nov 28 05:11:46 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0)
Nov 28 05:11:46 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:11:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch
Nov 28 05:11:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished
Nov 28 05:11:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:11:46 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 05:11:46 localhost systemd[1]: tmp-crun.Dus8Mw.mount: Deactivated successfully.
Nov 28 05:11:46 localhost podman[323178]: 2025-11-28 10:11:46.990673154 +0000 UTC m=+0.093632271 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Nov 28 05:11:47 localhost podman[323178]: 2025-11-28 10:11:47.001715125 +0000 UTC m=+0.104674222 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 28 05:11:47 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 05:11:47 localhost nova_compute[280168]: 2025-11-28 10:11:47.387 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:47 localhost nova_compute[280168]: 2025-11-28 10:11:47.408 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp'
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp' to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta'
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "format": "json"}]: dispatch
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 222 KiB/s wr, 20 op/s
Nov 28 05:11:48 localhost nova_compute[280168]: 2025-11-28 10:11:48.642 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:48 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch
Nov 28 05:11:48 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:11:48 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Nov 28 05:11:48 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:11:48 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:11:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:11:49 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:11:49 localhost nova_compute[280168]: 2025-11-28 10:11:49.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "f5b4c96d-e791-4ea7-b052-7e31c47893d9", "format": "json"}]: dispatch
Nov 28 05:11:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "tenant_id": "1562d9ae673b4a5ea5a1a571bd0ea2c8", "access_level": "rw", "format": "json"}]: dispatch
Nov 28 05:11:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < ""
Nov 28 05:11:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0)
Nov 28 05:11:49 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-459400664 with tenant 1562d9ae673b4a5ea5a1a571bd0ea2c8
Nov 28 05:11:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:11:49 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:49 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-459400664", "caps": ["mds", "allow rw path=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5", "osd", "allow rw pool=manila_data namespace=fsvolumens_3fe15641-5409-4db6-8856-5687ded3c0e8", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:11:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume authorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, tenant_id:1562d9ae673b4a5ea5a1a571bd0ea2c8, vol_name:cephfs) < ""
Nov 28 05:11:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 626 B/s rd, 130 KiB/s wr, 12 op/s
Nov 28 05:11:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "snap_name": "640bfb92-ee5b-44bb-be51-eab8b7a02ed4", "format": "json"}]: dispatch
Nov 28 05:11:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:11:50.800 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:11:50Z, description=, device_id=4350f7ee-36aa-45f0-add8-5b47287dce0e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=3e97e68b-041f-44d0-baf0-5db6fd023bf3, ip_allocation=immediate, mac_address=fa:16:3e:27:bf:d6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3681, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:11:50Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m
Nov 28 05:11:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:11:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 28 05:11:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:11:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 28 05:11:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:11:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 05:11:51 localhost nova_compute[280168]: 2025-11-28 10:11:51.034 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:51 localhost podman[323216]: 2025-11-28 10:11:51.073768899 +0000 UTC m=+0.066635428 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:11:51 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses
Nov 28 05:11:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:11:51 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:11:51 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 05:11:51 localhost podman[323231]: 2025-11-28 10:11:51.195196957 +0000 UTC m=+0.094306163 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 28 05:11:51 localhost nova_compute[280168]: 2025-11-28 10:11:51.233 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:51 localhost podman[323231]: 2025-11-28 10:11:51.235688406 +0000 UTC m=+0.134797642 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 28 05:11:51 localhost nova_compute[280168]: 2025-11-28 10:11:51.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:51 localhost nova_compute[280168]: 2025-11-28 10:11:51.237 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 28 05:11:51 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 05:11:51 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:11:51.345 261346 INFO neutron.agent.dhcp.agent [None req-bf598181-cfc2-4864-910a-99b921982202 - - - - - -] DHCP configuration for ports {'3e97e68b-041f-44d0-baf0-5db6fd023bf3'} is completed#033[00m
Nov 28 05:11:51 localhost nova_compute[280168]: 2025-11-28 10:11:51.471 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e277 e277: 6 total, 6 up, 6 in
Nov 28 05:11:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:11:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v601: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 895 B/s rd, 233 KiB/s wr, 20 op/s
Nov 28 05:11:52 localhost nova_compute[280168]: 2025-11-28 10:11:52.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:52 localhost nova_compute[280168]: 2025-11-28 10:11:52.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 28 05:11:52 localhost nova_compute[280168]: 2025-11-28 10:11:52.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 28 05:11:52 localhost nova_compute[280168]: 2025-11-28 10:11:52.261 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 28 05:11:52 localhost nova_compute[280168]: 2025-11-28 10:11:52.262 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Nov 28 05:11:52 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:11:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Nov 28 05:11:52 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "7b81adcd-a38a-49d6-b81f-5b50e8780854", "format": "json"}]: dispatch
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:52 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:11:52 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:11:52 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:11:52 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 28 05:11:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch
Nov
28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} v 0) Nov 28 05:11:52 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:52 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} v 0) Nov 28 05:11:52 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume deauthorize, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "auth_id": "tempest-cephx-id-459400664", "format": "json"}]: dispatch Nov 28 05:11:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:53 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-459400664, client_metadata.root=/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8/de8c16b9-d700-4f01-bec8-e5a6a7967ad5 Nov 28 05:11:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:11:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-459400664, format:json, prefix:fs subvolume evict, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < "" Nov 28 05:11:53 localhost nova_compute[280168]: 2025-11-28 10:11:53.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:11:53 localhost nova_compute[280168]: 2025-11-28 10:11:53.644 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:53 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:53 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-459400664", "format": "json"} : dispatch Nov 28 05:11:53 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"} : dispatch Nov 28 05:11:53 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-459400664"}]': finished Nov 28 05:11:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v602: 
177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 858 B/s rd, 223 KiB/s wr, 19 op/s Nov 28 05:11:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "snap_name": "640bfb92-ee5b-44bb-be51-eab8b7a02ed4_ff125526-c3a4-469c-b6b2-27c868491137", "force": true, "format": "json"}]: dispatch Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4_ff125526-c3a4-469c-b6b2-27c868491137, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < "" Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp' Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp' to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta' Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4_ff125526-c3a4-469c-b6b2-27c868491137, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < "" Nov 28 05:11:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : 
from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "snap_name": "640bfb92-ee5b-44bb-be51-eab8b7a02ed4", "force": true, "format": "json"}]: dispatch Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < "" Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.263 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.263 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.264 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.264 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource 
/usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.264 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp' Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta.tmp' to config b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80/.meta' Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.299 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:640bfb92-ee5b-44bb-be51-eab8b7a02ed4, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < "" Nov 28 05:11:54 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:11:54 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1945970975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.776 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.997 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.998 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11467MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:11:54 localhost nova_compute[280168]: 2025-11-28 10:11:54.999 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:54.999 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.064 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.065 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.285 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.833 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.834 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 
'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.850 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.895 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: 
COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m Nov 28 05:11:55 localhost nova_compute[280168]: 2025-11-28 10:11:55.919 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:11:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 212 MiB 
data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 186 KiB/s wr, 16 op/s Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.073 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:11:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:11:56 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:11:56 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:11:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:11:56 localhost ceph-mon[301134]: 
log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:11:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:11:56 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/515171710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.389 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.396 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.418 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 
72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.421 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:11:56 localhost nova_compute[280168]: 2025-11-28 10:11:56.421 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:11:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "format": "json"}]: dispatch Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3fe15641-5409-4db6-8856-5687ded3c0e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3fe15641-5409-4db6-8856-5687ded3c0e8, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:11:56 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:56.581+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3fe15641-5409-4db6-8856-5687ded3c0e8' of type subvolume
Nov 28 05:11:56 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3fe15641-5409-4db6-8856-5687ded3c0e8' of type subvolume
Nov 28 05:11:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3fe15641-5409-4db6-8856-5687ded3c0e8", "force": true, "format": "json"}]: dispatch
Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3fe15641-5409-4db6-8856-5687ded3c0e8'' moved to trashcan
Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:11:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3fe15641-5409-4db6-8856-5687ded3c0e8, vol_name:cephfs) < ""
Nov 28 05:11:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:11:57 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 28 05:11:57 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:57 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:11:57 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:11:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "7b81adcd-a38a-49d6-b81f-5b50e8780854_1a1adf00-c956-42a5-9ff9-ac2b4556603f", "force": true, "format": "json"}]: dispatch
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854_1a1adf00-c956-42a5-9ff9-ac2b4556603f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp'
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta'
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854_1a1adf00-c956-42a5-9ff9-ac2b4556603f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "7b81adcd-a38a-49d6-b81f-5b50e8780854", "force": true, "format": "json"}]: dispatch
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp'
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta'
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7b81adcd-a38a-49d6-b81f-5b50e8780854, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "format": "json"}]: dispatch
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:11:57.469+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '88c7260b-39e4-485d-ba81-9fc5fc185b80' of type subvolume
Nov 28 05:11:57 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '88c7260b-39e4-485d-ba81-9fc5fc185b80' of type subvolume
Nov 28 05:11:57 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "88c7260b-39e4-485d-ba81-9fc5fc185b80", "force": true, "format": "json"}]: dispatch
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/88c7260b-39e4-485d-ba81-9fc5fc185b80'' moved to trashcan
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 28 05:11:57 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:88c7260b-39e4-485d-ba81-9fc5fc185b80, vol_name:cephfs) < ""
Nov 28 05:11:57 localhost openstack_network_exporter[240973]: ERROR 10:11:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:11:57 localhost openstack_network_exporter[240973]: ERROR 10:11:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:11:57 localhost openstack_network_exporter[240973]: ERROR 10:11:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:11:57 localhost openstack_network_exporter[240973]: ERROR 10:11:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:11:57 localhost openstack_network_exporter[240973]:
Nov 28 05:11:57 localhost openstack_network_exporter[240973]: ERROR 10:11:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:11:57 localhost openstack_network_exporter[240973]:
Nov 28 05:11:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 179 KiB/s wr, 14 op/s
Nov 28 05:11:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:11:58.391 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:11:58Z, description=, device_id=c2d60d6f-83fc-4648-964a-020aeb44c54e, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=298c366b-f0f4-42f6-86e1-3759a03f1daf, ip_allocation=immediate, mac_address=fa:16:3e:a3:e3:6d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3695, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:11:58Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m
Nov 28 05:11:58 localhost nova_compute[280168]: 2025-11-28 10:11:58.653 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:58 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses
Nov 28 05:11:58 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:11:58 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:11:58 localhost podman[323320]: 2025-11-28 10:11:58.661646804 +0000 UTC m=+0.113473844 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 05:11:58 localhost podman[239012]: time="2025-11-28T10:11:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 05:11:58 localhost podman[239012]: @ - - [28/Nov/2025:10:11:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1"
Nov 28 05:11:58 localhost nova_compute[280168]: 2025-11-28 10:11:58.938 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:11:58 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:11:58.948 261346 INFO neutron.agent.dhcp.agent [None req-d62fd418-4170-4c1c-9799-adb0dffddf4b - - - - - -] DHCP configuration for ports {'298c366b-f0f4-42f6-86e1-3759a03f1daf'} is completed#033[00m
Nov 28 05:11:58 localhost podman[239012]: @ - - [28/Nov/2025:10:11:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19253 "" "Go-http-client/1.1"
Nov 28 05:11:59 localhost nova_compute[280168]: 2025-11-28 10:11:59.423 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 28 05:11:59 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Nov 28 05:11:59 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 28 05:11:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Nov 28 05:11:59 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:59 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:11:59 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:11:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v605: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 179 KiB/s wr, 14 op/s
Nov 28 05:12:00 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 28 05:12:00 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 28 05:12:00 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Nov 28 05:12:00 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Nov 28 05:12:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f", "format": "json"}]: dispatch
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.628 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:12:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Nov 28 05:12:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp'
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp' to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta'
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "format": "json"}]: dispatch
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e278 e278: 6 total, 6 up, 6 in
Nov 28 05:12:01 localhost nova_compute[280168]: 2025-11-28 10:12:01.107 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:01 localhost nova_compute[280168]: 2025-11-28 10:12:01.763 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:12:01 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:12:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v607: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 170 KiB/s wr, 13 op/s
Nov 28 05:12:01 localhost podman[323342]: 2025-11-28 10:12:01.975555313 +0000 UTC m=+0.083278122 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41)
Nov 28 05:12:02 localhost podman[323342]: 2025-11-28 10:12:02.011863464 +0000 UTC m=+0.119586273 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6)
Nov 28 05:12:02 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:12:02 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch
Nov 28 05:12:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:12:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Nov 28 05:12:02 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 28 05:12:02 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:12:02 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:12:02 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:02 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:12:03 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Nov 28 05:12:03 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:03 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:03 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:12:03 localhost nova_compute[280168]: 2025-11-28 10:12:03.692 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "snap_name": "15a14553-6d16-4d08-bf64-a6b8c8a6c750", "format": "json"}]: dispatch
Nov 28 05:12:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < ""
Nov 28 05:12:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 170 KiB/s wr, 13 op/s
Nov 28 05:12:04 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f_69807d00-efbf-4837-b2f8-58c7e5436c8f", "force": true, "format": "json"}]: dispatch
Nov 28 05:12:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f_69807d00-efbf-4837-b2f8-58c7e5436c8f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs)
< "" Nov 28 05:12:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:04 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f_69807d00-efbf-4837-b2f8-58c7e5436c8f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:04 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f", "force": true, "format": "json"}]: dispatch Nov 28 05:12:04 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, 
format:json, prefix:fs subvolume snapshot rm, snap_name:11d8e4ee-62d3-434d-8e1b-ad29c40b9e5f, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:12:05 Nov 28 05:12:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:12:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:12:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['.mgr', 'volumes', 'vms', 'manila_metadata', 'backups', 'images', 'manila_data'] Nov 28 05:12:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:12:05 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e279 e279: 6 total, 6 up, 6 in Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', )] Nov 28 05:12:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs' Nov 28 05:12:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 167 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 89 KiB/s wr, 7 op/s Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 
'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:12:06 localhost systemd-journald[48427]: Data hash table of /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal has a fill level at 75.0 (53724 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation. Nov 28 05:12:06 localhost systemd-journald[48427]: /run/log/journal/5cd59ba25ae47acac865224fa46a5f9e/system.journal: Journal header limits reached or header out-of-date, rotating. Nov 28 05:12:06 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0003255208333333333 quantized to 32 (current 32) Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:12:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00154799605667225 of space, bias 4.0, pg target 1.232204861111111 quantized to 16 (current 16) Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 
28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:12:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:12:06 localhost rsyslogd[758]: imjournal: journal files changed, reloading... [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ] Nov 28 05:12:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:06 localhost nova_compute[280168]: 2025-11-28 10:12:06.162 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:12:06 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:12:06 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:06 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:06 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:06 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:06 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:06 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : 
dispatch Nov 28 05:12:06 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:12:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e280 e280: 6 total, 6 up, 6 in Nov 28 05:12:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 235 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 57 op/s Nov 28 05:12:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "snap_name": "15a14553-6d16-4d08-bf64-a6b8c8a6c750_fac779c6-92c9-42a2-b987-c4a675360397", "force": true, "format": "json"}]: dispatch Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750_fac779c6-92c9-42a2-b987-c4a675360397, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp' Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp' to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta' Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, 
snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750_fac779c6-92c9-42a2-b987-c4a675360397, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "snap_name": "15a14553-6d16-4d08-bf64-a6b8c8a6c750", "force": true, "format": "json"}]: dispatch Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp' Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta.tmp' to config b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93/.meta' Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:15a14553-6d16-4d08-bf64-a6b8c8a6c750, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:08 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "fff24b7d-2f6c-4225-9289-9551d74f54fa", "format": "json"}]: dispatch Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs 
subvolume snapshot create, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:08 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:08 localhost nova_compute[280168]: 2025-11-28 10:12:08.746 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:09 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:12:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:12:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:09 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:09 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:09 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:09 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:09 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw 
path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:09 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v613: 177 pgs: 177 active+clean; 235 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 2.7 MiB/s wr, 41 op/s Nov 28 05:12:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e281 e281: 6 total, 6 up, 6 in Nov 28 05:12:11 localhost nova_compute[280168]: 2025-11-28 10:12:11.225 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:11 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7c140f95-3467-4116-9762-d08cfc663c93", "format": "json"}]: dispatch Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7c140f95-3467-4116-9762-d08cfc663c93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7c140f95-3467-4116-9762-d08cfc663c93, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 
2025-11-28T10:12:11.561+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c140f95-3467-4116-9762-d08cfc663c93' of type subvolume Nov 28 05:12:11 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7c140f95-3467-4116-9762-d08cfc663c93' of type subvolume Nov 28 05:12:11 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7c140f95-3467-4116-9762-d08cfc663c93", "force": true, "format": "json"}]: dispatch Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7c140f95-3467-4116-9762-d08cfc663c93'' moved to trashcan Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7c140f95-3467-4116-9762-d08cfc663c93, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e282 e282: 6 total, 6 up, 6 in Nov 28 05:12:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:11 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": 
"fff24b7d-2f6c-4225-9289-9551d74f54fa_ead2690c-f6f8-4b05-b16e-a6295cb33d61", "force": true, "format": "json"}]: dispatch Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa_ead2690c-f6f8-4b05-b16e-a6295cb33d61, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa_ead2690c-f6f8-4b05-b16e-a6295cb33d61, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "fff24b7d-2f6c-4225-9289-9551d74f54fa", "force": true, "format": "json"}]: dispatch Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 3.7 MiB/s wr, 92 op/s Nov 28 05:12:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fff24b7d-2f6c-4225-9289-9551d74f54fa, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:12 localhost nova_compute[280168]: 2025-11-28 10:12:12.133 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:12 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:12:12 localhost podman[323379]: 2025-11-28 10:12:12.156737544 +0000 UTC m=+0.072563440 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:12:12 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:12:12 localhost dnsmasq-dhcp[310862]: read 
/var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:12:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Nov 28 05:12:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:12 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Nov 28 05:12:12 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:12 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:12:12 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Nov 28 05:12:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1973228268' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Nov 28 05:12:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Nov 28 05:12:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1973228268' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Nov 28 05:12:13 localhost nova_compute[280168]: 2025-11-28 10:12:13.789 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:13 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:13 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:13 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:13 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 28 05:12:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v617: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 226 KiB/s wr, 42 op/s
Nov 28 05:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:12:14 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:12:15 localhost podman[323404]: 2025-11-28 10:12:15.012481521 +0000 UTC m=+0.100466573 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 05:12:15 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "fde565fd-d88b-4da4-8499-bcfdb95820a1", "format": "json"}]: dispatch
Nov 28 05:12:15 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:15 localhost podman[323404]: 2025-11-28 10:12:15.018663241 +0000 UTC m=+0.106648313 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 05:12:15 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully.
Nov 28 05:12:15 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:15 localhost podman[323401]: 2025-11-28 10:12:15.071179873 +0000 UTC m=+0.169741461 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 28 05:12:15 localhost podman[323403]: 2025-11-28 10:12:15.125974274 +0000 UTC m=+0.218269799 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 05:12:15 localhost podman[323402]: 2025-11-28 10:12:15.179412753 +0000 UTC m=+0.274022799 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 28 05:12:15 localhost podman[323401]: 2025-11-28 10:12:15.207825021 +0000 UTC m=+0.306386649 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 28 05:12:15 localhost podman[323402]: 2025-11-28 10:12:15.216261722 +0000 UTC m=+0.310871828 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 28 05:12:15 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 05:12:15 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 05:12:15 localhost podman[323403]: 2025-11-28 10:12:15.26157896 +0000 UTC m=+0.353874525 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 28 05:12:15 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 05:12:15 localhost nova_compute[280168]: 2025-11-28 10:12:15.466 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:15 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses
Nov 28 05:12:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:12:15 localhost podman[323502]: 2025-11-28 10:12:15.501715134 +0000 UTC m=+0.060281743 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:12:15 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:12:15 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e283 e283: 6 total, 6 up, 6 in
Nov 28 05:12:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 141 KiB/s wr, 36 op/s
Nov 28 05:12:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch
Nov 28 05:12:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:12:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Nov 28 05:12:16 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:16 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:12:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:12:16 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:16 localhost nova_compute[280168]: 2025-11-28 10:12:16.229 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:12:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e284 e284: 6 total, 6 up, 6 in
Nov 28 05:12:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:12:16 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:16 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:12:16 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:12:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 05:12:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 831 B/s rd, 130 KiB/s wr, 11 op/s
Nov 28 05:12:17 localhost podman[323520]: 2025-11-28 10:12:17.982182814 +0000 UTC m=+0.088296077 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors )
Nov 28 05:12:17 localhost podman[323520]: 2025-11-28 10:12:17.992576465 +0000 UTC m=+0.098689758 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 28 05:12:18 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 05:12:18 localhost nova_compute[280168]: 2025-11-28 10:12:18.826 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Nov 28 05:12:19 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:19 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Nov 28 05:12:19 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "fde565fd-d88b-4da4-8499-bcfdb95820a1_3ca8ffc5-e61a-4a5f-afb0-e3f73884eb70", "force": true, "format": "json"}]: dispatch
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1_3ca8ffc5-e61a-4a5f-afb0-e3f73884eb70, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp'
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta'
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1_3ca8ffc5-e61a-4a5f-afb0-e3f73884eb70, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "fde565fd-d88b-4da4-8499-bcfdb95820a1", "force": true, "format": "json"}]: dispatch
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp'
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta'
Nov 28 05:12:19 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fde565fd-d88b-4da4-8499-bcfdb95820a1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < ""
Nov 28 05:12:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 100 KiB/s wr, 8 op/s
Nov 28 05:12:20 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:20 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Nov 28 05:12:20 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Nov 28 05:12:20 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Nov 28 05:12:21 localhost nova_compute[280168]: 2025-11-28 10:12:21.266 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:12:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e285 e285: 6 total, 6 up, 6 in
Nov 28 05:12:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:12:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.
Nov 28 05:12:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 224 KiB/s wr, 18 op/s Nov 28 05:12:21 localhost podman[323544]: 2025-11-28 10:12:21.975243269 +0000 UTC m=+0.083619452 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}) Nov 28 05:12:22 localhost podman[323544]: 2025-11-28 10:12:22.016553565 +0000 UTC m=+0.124929728 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:12:22 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: 
Deactivated successfully. Nov 28 05:12:22 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:22.540 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:12:22Z, description=, device_id=a2e46648-c204-4aa4-852a-22559e830378, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=07485490-1606-4c57-81f4-a8dc7bf2ee54, ip_allocation=immediate, mac_address=fa:16:3e:fa:9a:f8, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3765, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:12:22Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:12:22 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:12:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:12:22 
localhost podman[323580]: 2025-11-28 10:12:22.762192703 +0000 UTC m=+0.063382677 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:12:22 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:12:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e286 e286: 6 total, 6 up, 6 in Nov 28 05:12:22 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:12:22 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:12:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:22 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:22 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:22 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:23 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:23.022 261346 INFO neutron.agent.dhcp.agent [None req-f72fcaf0-cdef-4bff-a079-719b622765fd - - - - - -] DHCP configuration for ports {'07485490-1606-4c57-81f4-a8dc7bf2ee54'} is completed#033[00m Nov 28 05:12:23 localhost nova_compute[280168]: 2025-11-28 10:12:23.353 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume 
snapshot create", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "91be3195-e4e7-4f0e-9446-564a14aa6ee3", "format": "json"}]: dispatch Nov 28 05:12:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:23 localhost nova_compute[280168]: 2025-11-28 10:12:23.828 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", 
"mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 286 B/s rd, 77 KiB/s wr, 5 op/s Nov 28 05:12:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 69 KiB/s wr, 5 op/s Nov 28 05:12:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:12:26 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 28 
05:12:26 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:26 localhost nova_compute[280168]: 2025-11-28 10:12:26.312 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:26 localhost nova_compute[280168]: 2025-11-28 10:12:26.652 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:27 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:27 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:27 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 28 05:12:27 localhost openstack_network_exporter[240973]: ERROR 10:12:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:12:27 localhost openstack_network_exporter[240973]: ERROR 10:12:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:12:27 localhost openstack_network_exporter[240973]: ERROR 10:12:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:12:27 localhost openstack_network_exporter[240973]: ERROR 10:12:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:12:27 localhost openstack_network_exporter[240973]: Nov 28 05:12:27 localhost openstack_network_exporter[240973]: ERROR 10:12:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:12:27 localhost openstack_network_exporter[240973]: Nov 28 05:12:27 localhost ceph-mgr[286188]: log_channel(audit) 
log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a152cb30-8073-4c75-8e60-d299f2f23f89", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a152cb30-8073-4c75-8e60-d299f2f23f89/.meta.tmp' Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a152cb30-8073-4c75-8e60-d299f2f23f89/.meta.tmp' to config b'/volumes/_nogroup/a152cb30-8073-4c75-8e60-d299f2f23f89/.meta' Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:27 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a152cb30-8073-4c75-8e60-d299f2f23f89", "format": "json"}]: dispatch Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:27 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:27 localhost 
ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v628: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 151 KiB/s wr, 10 op/s Nov 28 05:12:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "91be3195-e4e7-4f0e-9446-564a14aa6ee3_e616749c-df0b-458b-9f06-7163c80539a0", "force": true, "format": "json"}]: dispatch Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3_e616749c-df0b-458b-9f06-7163c80539a0, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3_e616749c-df0b-458b-9f06-7163c80539a0, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "91be3195-e4e7-4f0e-9446-564a14aa6ee3", "force": true, "format": "json"}]: dispatch 
Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:91be3195-e4e7-4f0e-9446-564a14aa6ee3, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:28 localhost nova_compute[280168]: 2025-11-28 10:12:28.854 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:28 localhost podman[239012]: time="2025-11-28T10:12:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:12:28 localhost podman[239012]: @ - - [28/Nov/2025:10:12:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:12:28 localhost podman[239012]: @ - - [28/Nov/2025:10:12:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19241 "" "Go-http-client/1.1" Nov 28 05:12:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": 
"cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:12:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:12:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:29 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:29 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:29 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:29 
localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 80 KiB/s wr, 5 op/s Nov 28 05:12:30 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:30 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:30 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 
05:12:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "535c32b0-23e4-4a2c-bbae-552340590760", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:31 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/535c32b0-23e4-4a2c-bbae-552340590760/.meta.tmp' Nov 28 05:12:31 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/535c32b0-23e4-4a2c-bbae-552340590760/.meta.tmp' to config b'/volumes/_nogroup/535c32b0-23e4-4a2c-bbae-552340590760/.meta' Nov 28 05:12:31 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:31 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "535c32b0-23e4-4a2c-bbae-552340590760", "format": "json"}]: dispatch Nov 28 05:12:31 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:31 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e287 e287: 6 total, 6 up, 6 in Nov 28 05:12:31 localhost nova_compute[280168]: 2025-11-28 10:12:31.354 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e288 e288: 6 total, 6 up, 6 in Nov 28 05:12:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v632: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 178 KiB/s wr, 12 op/s Nov 28 05:12:32 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:12:32 localhost podman[323620]: 2025-11-28 10:12:32.553355274 +0000 UTC m=+0.084215431 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter) Nov 28 05:12:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:32 localhost podman[323620]: 2025-11-28 10:12:32.566237892 +0000 UTC m=+0.097098059 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7) Nov 28 05:12:32 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:12:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:12:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0) Nov 28 05:12:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:32 localhost ceph-mgr[286188]: [volumes INFO 
volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:33 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "f5b4c96d-e791-4ea7-b052-7e31c47893d9_0d92dab8-838f-4c0e-9a76-ed8d13f07bb1", "force": true, "format": "json"}]: dispatch Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9_0d92dab8-838f-4c0e-9a76-ed8d13f07bb1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9_0d92dab8-838f-4c0e-9a76-ed8d13f07bb1, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:33 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "snap_name": "f5b4c96d-e791-4ea7-b052-7e31c47893d9", "force": true, "format": "json"}]: dispatch 
Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:12:33 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:12:33 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta.tmp' to config b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee/.meta' Nov 28 05:12:33 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 0af48f44-05b6-4c2e-a412-8b8a1114462d (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:12:33 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 0af48f44-05b6-4c2e-a412-8b8a1114462d (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:12:33 localhost 
ceph-mgr[286188]: [progress INFO root] Completed event 0af48f44-05b6-4c2e-a412-8b8a1114462d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:12:33 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:12:33 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:12:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f5b4c96d-e791-4ea7-b052-7e31c47893d9, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:12:33 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:12:33 localhost nova_compute[280168]: 2025-11-28 10:12:33.858 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 178 KiB/s wr, 12 op/s Nov 28 05:12:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "535c32b0-23e4-4a2c-bbae-552340590760", "format": "json"}]: dispatch Nov 28 05:12:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:535c32b0-23e4-4a2c-bbae-552340590760, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:535c32b0-23e4-4a2c-bbae-552340590760, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:34 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:12:34.567+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '535c32b0-23e4-4a2c-bbae-552340590760' of type subvolume Nov 28 05:12:34 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '535c32b0-23e4-4a2c-bbae-552340590760' of type subvolume Nov 28 05:12:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "535c32b0-23e4-4a2c-bbae-552340590760", "force": true, "format": "json"}]: dispatch Nov 28 05:12:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:34 localhost 
ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/535c32b0-23e4-4a2c-bbae-552340590760'' moved to trashcan Nov 28 05:12:34 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:12:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:535c32b0-23e4-4a2c-bbae-552340590760, vol_name:cephfs) < "" Nov 28 05:12:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e289 e289: 6 total, 6 up, 6 in Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:12:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:12:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": 
"auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:35 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:12:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:12:35 localhost nova_compute[280168]: 2025-11-28 10:12:35.957 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:35 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:35.958 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:12:35 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:35.959 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:12:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v635: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 127 KiB/s wr, 9 op/s Nov 28 05:12:36 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:12:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:12:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "format": "json"}]: dispatch Nov 28 
05:12:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c66addb9-39a7-4478-8dc0-78ac198152ee, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:36 localhost nova_compute[280168]: 2025-11-28 10:12:36.397 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c66addb9-39a7-4478-8dc0-78ac198152ee, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:12:36.402+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c66addb9-39a7-4478-8dc0-78ac198152ee' of type subvolume Nov 28 05:12:36 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c66addb9-39a7-4478-8dc0-78ac198152ee' of type subvolume Nov 28 05:12:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c66addb9-39a7-4478-8dc0-78ac198152ee", "force": true, "format": "json"}]: dispatch Nov 28 05:12:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c66addb9-39a7-4478-8dc0-78ac198152ee'' moved to trashcan Nov 28 05:12:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:12:36 localhost ceph-mgr[286188]: 
[volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c66addb9-39a7-4478-8dc0-78ac198152ee, vol_name:cephfs) < "" Nov 28 05:12:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:12:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:37 localhost 
dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:12:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:12:37 localhost podman[323726]: 2025-11-28 10:12:37.399192403 +0000 UTC m=+0.064026907 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:12:37 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:12:37 localhost nova_compute[280168]: 2025-11-28 10:12:37.835 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:37 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a152cb30-8073-4c75-8e60-d299f2f23f89", "format": "json"}]: dispatch Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a152cb30-8073-4c75-8e60-d299f2f23f89, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a152cb30-8073-4c75-8e60-d299f2f23f89, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:37 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:12:37.896+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a152cb30-8073-4c75-8e60-d299f2f23f89' of type subvolume Nov 28 05:12:37 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a152cb30-8073-4c75-8e60-d299f2f23f89' of type subvolume Nov 28 05:12:37 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a152cb30-8073-4c75-8e60-d299f2f23f89", "force": true, "format": "json"}]: dispatch Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a152cb30-8073-4c75-8e60-d299f2f23f89'' moved to trashcan Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:12:37 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a152cb30-8073-4c75-8e60-d299f2f23f89, vol_name:cephfs) < "" Nov 28 05:12:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.0 KiB/s rd, 280 KiB/s wr, 18 op/s Nov 28 05:12:38 localhost nova_compute[280168]: 2025-11-28 10:12:38.893 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:39 localhost ceph-mgr[286188]: log_channel(audit) log 
[DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:12:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:39 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:12:39 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" 
Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:39 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:12:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v637: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 502 B/s rd, 142 KiB/s wr, 8 op/s Nov 28 05:12:41 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5", "mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5, mode:0755, prefix:fs 
subvolumegroup create, vol_name:cephfs) < "" Nov 28 05:12:41 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 28 05:12:41 localhost nova_compute[280168]: 2025-11-28 10:12:41.453 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 e290: 6 total, 6 up, 6 in Nov 28 05:12:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:41 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:41.962 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:12:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v639: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 228 KiB/s wr, 13 op/s Nov 28 05:12:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:12:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, 
sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:12:42 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:42 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:42 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:42 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:42 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r 
path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:42 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:43 localhost nova_compute[280168]: 2025-11-28 10:12:43.936 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 963 B/s rd, 215 KiB/s wr, 12 op/s Nov 28 05:12:44 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", 
"vol_name": "cephfs", "group_name": "a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5", "force": true, "format": "json"}]: dispatch Nov 28 05:12:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 28 05:12:44 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:a1a9bb3f-1d01-4004-8dd6-66f4f23a71c5, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 28 05:12:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0) Nov 28 05:12:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:45 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0) Nov 28 05:12:45 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:45 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice", "format": "json"}]: dispatch Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:45 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:12:45 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:12:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v641: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 183 KiB/s wr, 10 op/s Nov 28 05:12:45 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:45.984 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:12:45Z, description=, device_id=4b3fe2b0-a7cf-43ab-948e-4143df334636, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=78d57a6d-83f8-4c5d-bd32-76f006a72d19, ip_allocation=immediate, mac_address=fa:16:3e:81:fc:b6, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3793, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:12:45Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:12:45 localhost systemd[1]: tmp-crun.BjPi7Y.mount: Deactivated successfully. 
Nov 28 05:12:46 localhost podman[323750]: 2025-11-28 10:12:46.001860665 +0000 UTC m=+0.101678309 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0) Nov 28 05:12:46 localhost podman[323751]: 2025-11-28 10:12:46.043437239 +0000 UTC m=+0.139569360 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible) Nov 28 05:12:46 localhost podman[323749]: 2025-11-28 10:12:46.098273451 +0000 UTC m=+0.199389136 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 
'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible) Nov 28 05:12:46 localhost podman[323752]: 2025-11-28 10:12:46.145510489 +0000 UTC m=+0.237180872 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 
'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:12:46 localhost podman[323752]: 2025-11-28 10:12:46.158602074 +0000 UTC m=+0.250272417 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:12:46 localhost podman[323749]: 2025-11-28 10:12:46.165806297 +0000 UTC m=+0.266921962 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 
'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3) Nov 28 05:12:46 localhost podman[323750]: 2025-11-28 10:12:46.175040841 +0000 UTC m=+0.274858525 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:12:46 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:12:46 localhost podman[323751]: 2025-11-28 10:12:46.176142225 +0000 UTC m=+0.272274346 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 28 05:12:46 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:12:46 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:12:46 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:12:46 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:12:46 localhost podman[323848]: 2025-11-28 10:12:46.29550357 +0000 UTC m=+0.122318197 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:12:46 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:12:46 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:12:46 localhost nova_compute[280168]: 2025-11-28 10:12:46.492 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:46 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:46.491 261346 INFO neutron.agent.dhcp.agent [None req-2470b25d-957b-48cf-a593-e8ac202053bf - - - - - -] DHCP configuration for ports {'78d57a6d-83f8-4c5d-bd32-76f006a72d19'} is completed#033[00m Nov 28 05:12:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch Nov 28 05:12:46 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": 
"auth rm", "entity": "client.alice"} : dispatch Nov 28 05:12:46 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished Nov 28 05:12:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:47 localhost nova_compute[280168]: 2025-11-28 10:12:47.258 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:47 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "32cdd268-82b3-4d40-9663-e0bea21b28c3", "mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:32cdd268-82b3-4d40-9663-e0bea21b28c3, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 28 05:12:47 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:32cdd268-82b3-4d40-9663-e0bea21b28c3, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < "" Nov 28 05:12:47 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v642: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 140 KiB/s wr, 8 op/s Nov 28 05:12:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:12:48 localhost nova_compute[280168]: 2025-11-28 10:12:48.968 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:49 localhost podman[323869]: 2025-11-28 10:12:49.001822253 +0000 UTC m=+0.105842678 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:12:49 localhost podman[323869]: 2025-11-28 10:12:49.011492521 +0000 UTC m=+0.115512886 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:12:49 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:12:49 localhost nova_compute[280168]: 2025-11-28 10:12:49.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:49 localhost nova_compute[280168]: 2025-11-28 10:12:49.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:49 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:12:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:12:49 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:49 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:49 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:49 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:49 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:49 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 140 KiB/s wr, 8 op/s Nov 28 05:12:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:50.146 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:12:49Z, description=, device_id=c6a3c308-ab33-40a1-9933-91a047698d13, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=aeec12f0-7000-43ed-81b1-6c08563e6d70, ip_allocation=immediate, mac_address=fa:16:3e:eb:bc:41, name=, network=admin_state_up=True, 
availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3806, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:12:50Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:12:50 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:12:50 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:12:50 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:12:50 localhost podman[323909]: 2025-11-28 10:12:50.462300238 +0000 UTC m=+0.049313663 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:12:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "32cdd268-82b3-4d40-9663-e0bea21b28c3", "force": true, "format": "json"}]: dispatch Nov 28 05:12:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:32cdd268-82b3-4d40-9663-e0bea21b28c3, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 28 05:12:50 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:50 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:50 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:50 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw 
pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:32cdd268-82b3-4d40-9663-e0bea21b28c3, prefix:fs subvolumegroup rm, vol_name:cephfs) < "" Nov 28 05:12:50 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:12:50.718 261346 INFO neutron.agent.dhcp.agent [None req-6f2fcf49-eb70-479b-bb2d-79344f63fe33 - - - - - -] DHCP configuration for ports {'aeec12f0-7000-43ed-81b1-6c08563e6d70'} is completed#033[00m Nov 28 05:12:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:50.855 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:12:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:50.858 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:12:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:12:50.858 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:12:51 localhost nova_compute[280168]: 2025-11-28 10:12:51.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:51 localhost 
nova_compute[280168]: 2025-11-28 10:12:51.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:12:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f14e343-1793-4cb4-b8c5-3069f088f109", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2f14e343-1793-4cb4-b8c5-3069f088f109/.meta.tmp' Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2f14e343-1793-4cb4-b8c5-3069f088f109/.meta.tmp' to config b'/volumes/_nogroup/2f14e343-1793-4cb4-b8c5-3069f088f109/.meta' Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:51 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f14e343-1793-4cb4-b8c5-3069f088f109", "format": "json"}]: dispatch Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:51 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:51 localhost nova_compute[280168]: 2025-11-28 10:12:51.477 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:51 localhost nova_compute[280168]: 2025-11-28 10:12:51.494 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:51 localhost nova_compute[280168]: 2025-11-28 10:12:51.775 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:51 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v644: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 302 B/s rd, 130 KiB/s wr, 8 op/s Nov 28 05:12:52 localhost nova_compute[280168]: 2025-11-28 10:12:52.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:52 localhost nova_compute[280168]: 2025-11-28 10:12:52.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m 
Nov 28 05:12:52 localhost nova_compute[280168]: 2025-11-28 10:12:52.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:12:52 localhost nova_compute[280168]: 2025-11-28 10:12:52.449 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:12:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:12:52 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:12:52 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:52 localhost podman[323929]: 2025-11-28 10:12:52.979885445 +0000 UTC m=+0.087264255 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 
'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:12:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:12:53 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:53 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 28 05:12:53 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:12:53 localhost podman[323929]: 2025-11-28 
10:12:53.018557679 +0000 UTC m=+0.125936439 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd) Nov 28 05:12:53 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:12:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:12:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:12:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:12:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:53 localhost nova_compute[280168]: 2025-11-28 10:12:53.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:53 localhost nova_compute[280168]: 2025-11-28 10:12:53.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:53 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:12:53 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:53 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:12:53 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 28 05:12:53 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 110 KiB/s wr, 6 op/s Nov 28 05:12:54 localhost nova_compute[280168]: 2025-11-28 10:12:54.011 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:54 localhost nova_compute[280168]: 2025-11-28 10:12:54.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f14e343-1793-4cb4-b8c5-3069f088f109", "format": "json"}]: dispatch Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2f14e343-1793-4cb4-b8c5-3069f088f109, format:json, prefix:fs clone status, 
vol_name:cephfs) < "" Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2f14e343-1793-4cb4-b8c5-3069f088f109, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:12:54 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:12:54.667+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f14e343-1793-4cb4-b8c5-3069f088f109' of type subvolume Nov 28 05:12:54 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2f14e343-1793-4cb4-b8c5-3069f088f109' of type subvolume Nov 28 05:12:54 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f14e343-1793-4cb4-b8c5-3069f088f109", "force": true, "format": "json"}]: dispatch Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2f14e343-1793-4cb4-b8c5-3069f088f109'' moved to trashcan Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:12:54 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2f14e343-1793-4cb4-b8c5-3069f088f109, vol_name:cephfs) < "" Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task 
ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.274 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.274 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.274 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.275 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.275 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 
28 05:12:55 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:12:55 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1162585651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.779 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.971 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.973 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11448MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": 
"pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.973 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:12:55 localhost nova_compute[280168]: 2025-11-28 10:12:55.973 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:12:55 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v646: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 110 KiB/s wr, 6 op/s Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.105 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.106 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.121 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:12:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch Nov 28 05:12:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, 
tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:12:56 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:56 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice_bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:12:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:12:56 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.536 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, 
sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:12:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:12:56 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/44064503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.586 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.592 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.606 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.609 280172 DEBUG nova.compute.resource_tracker [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.609 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:12:56 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:12:56 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:56 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:12:56 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", 
"osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:12:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:12:56 localhost nova_compute[280168]: 2025-11-28 10:12:56.861 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:57 localhost openstack_network_exporter[240973]: ERROR 10:12:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:12:57 localhost openstack_network_exporter[240973]: ERROR 10:12:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:12:57 localhost openstack_network_exporter[240973]: ERROR 10:12:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:12:57 localhost openstack_network_exporter[240973]: ERROR 10:12:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:12:57 localhost openstack_network_exporter[240973]: Nov 28 05:12:57 localhost openstack_network_exporter[240973]: ERROR 10:12:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:12:57 localhost openstack_network_exporter[240973]: Nov 28 05:12:57 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 180 KiB/s wr, 11 op/s Nov 28 05:12:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "028817fc-1877-4dba-8e2f-8daf5bd38000", "size": 1073741824, "namespace_isolated": true, 
"mode": "0755", "format": "json"}]: dispatch Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:12:58 localhost ceph-osd[32393]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3. Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/028817fc-1877-4dba-8e2f-8daf5bd38000/.meta.tmp' Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/028817fc-1877-4dba-8e2f-8daf5bd38000/.meta.tmp' to config b'/volumes/_nogroup/028817fc-1877-4dba-8e2f-8daf5bd38000/.meta' Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:12:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "028817fc-1877-4dba-8e2f-8daf5bd38000", "format": "json"}]: dispatch Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:12:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:12:58 localhost podman[239012]: time="2025-11-28T10:12:58Z" level=info msg="List 
containers: received `last` parameter - overwriting `limit`" Nov 28 05:12:58 localhost podman[239012]: @ - - [28/Nov/2025:10:12:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:12:58 localhost podman[239012]: @ - - [28/Nov/2025:10:12:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19255 "" "Go-http-client/1.1" Nov 28 05:12:59 localhost nova_compute[280168]: 2025-11-28 10:12:59.042 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:12:59 localhost nova_compute[280168]: 2025-11-28 10:12:59.611 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:12:59 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:12:59 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:12:59 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v648: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 119 KiB/s wr, 7 op/s Nov 28 05:13:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0) Nov 28 05:13:00 localhost ceph-mon[301134]: log_channel(audit) 
log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:13:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0) Nov 28 05:13:00 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:13:00 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:13:00 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch Nov 28 05:13:00 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch Nov 28 05:13:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice_bob", "format": "json"}]: dispatch Nov 28 05:13:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with 
auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:13:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:13:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:01 localhost systemd[1]: tmp-crun.0L4EHz.mount: Deactivated successfully. Nov 28 05:13:01 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:13:01 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:13:01 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:13:01 localhost podman[324013]: 2025-11-28 10:13:01.068857929 +0000 UTC m=+0.062113848 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:13:01 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished Nov 28 05:13:01 localhost nova_compute[280168]: 2025-11-28 10:13:01.136 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:01 
localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "028817fc-1877-4dba-8e2f-8daf5bd38000", "format": "json"}]: dispatch Nov 28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:028817fc-1877-4dba-8e2f-8daf5bd38000, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:028817fc-1877-4dba-8e2f-8daf5bd38000, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:01 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:13:01.339+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '028817fc-1877-4dba-8e2f-8daf5bd38000' of type subvolume Nov 28 05:13:01 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '028817fc-1877-4dba-8e2f-8daf5bd38000' of type subvolume Nov 28 05:13:01 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "028817fc-1877-4dba-8e2f-8daf5bd38000", "force": true, "format": "json"}]: dispatch Nov 28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/028817fc-1877-4dba-8e2f-8daf5bd38000'' moved to trashcan Nov 28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 
28 05:13:01 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:028817fc-1877-4dba-8e2f-8daf5bd38000, vol_name:cephfs) < "" Nov 28 05:13:01 localhost nova_compute[280168]: 2025-11-28 10:13:01.564 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:01 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v649: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 184 KiB/s wr, 10 op/s Nov 28 05:13:02 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:13:02 localhost podman[324034]: 2025-11-28 10:13:02.971736191 +0000 UTC m=+0.081611240 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': 
'/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 05:13:02 localhost podman[324034]: 2025-11-28 10:13:02.987509598 +0000 UTC m=+0.097384637 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9) Nov 28 05:13:03 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:13:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:13:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:13:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0) Nov 28 05:13:03 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:13:03 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73 Nov 28 05:13:03 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:13:03.290 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:13:03Z, description=, device_id=c18f45d1-0864-4b21-9d36-6aae15733137, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=f47899ca-bb53-42d6-a1d1-6396c431c982, ip_allocation=immediate, mac_address=fa:16:3e:82:5d:ad, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, 
description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3819, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:13:03Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:13:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:13:03 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:13:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < "" Nov 28 05:13:03 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 3 addresses Nov 28 05:13:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:13:03 localhost podman[324071]: 2025-11-28 10:13:03.547137293 +0000 UTC m=+0.058132355 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:13:03 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:13:03 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v650: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 135 KiB/s wr, 7 op/s Nov 28 05:13:04 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:13:04.031 261346 INFO neutron.agent.dhcp.agent [None req-f226837e-eb5c-487b-9d6b-2744578ba75a - - - - - -] DHCP configuration for ports {'f47899ca-bb53-42d6-a1d1-6396c431c982'} is completed#033[00m Nov 28 05:13:04 localhost nova_compute[280168]: 2025-11-28 10:13:04.048 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:04 localhost ceph-mon[301134]: from='mgr.34481 
172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch Nov 28 05:13:04 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:13:04 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:13:04 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:13:04 localhost nova_compute[280168]: 2025-11-28 10:13:04.378 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:13:05 Nov 28 05:13:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:13:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:13:05 localhost ceph-mgr[286188]: 
[balancer INFO root] pools ['images', 'vms', '.mgr', 'backups', 'manila_metadata', 'manila_data', 'volumes'] Nov 28 05:13:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:13:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:13:05 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 135 KiB/s wr, 7 op/s Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 
(current 32)
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:13:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002151327383445003 of space, bias 4.0, pg target 1.7124565972222223 quantized to 16 (current 16)
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:13:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:13:06 localhost nova_compute[280168]: 2025-11-28 10:13:06.603 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:13:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Nov 28 05:13:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:07 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Nov 28 05:13:07 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:13:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:07 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:07 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 28 05:13:07 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 167 KiB/s wr, 10 op/s
Nov 28 05:13:09 localhost nova_compute[280168]: 2025-11-28 10:13:09.083 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:09 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v653: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 96 KiB/s wr, 5 op/s
Nov 28 05:13:11 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "r", "format": "json"}]: dispatch
Nov 28 05:13:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Nov 28 05:13:11 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:11 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID alice bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:13:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:13:11 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:11 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:11 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:11 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:11 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:11 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow r pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:13:11 localhost nova_compute[280168]: 2025-11-28 10:13:11.638 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:13:11 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 159 KiB/s wr, 8 op/s
Nov 28 05:13:13 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v655: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 94 KiB/s wr, 6 op/s
Nov 28 05:13:14 localhost nova_compute[280168]: 2025-11-28 10:13:14.118 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:15 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 94 KiB/s wr, 6 op/s
Nov 28 05:13:16 localhost nova_compute[280168]: 2025-11-28 10:13:16.676 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:13:16 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:13:16 localhost podman[324093]: 2025-11-28 10:13:16.981115285 +0000 UTC m=+0.082782436 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 28 05:13:17 localhost podman[324093]: 2025-11-28 10:13:17.040008184 +0000 UTC m=+0.141675315 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 28 05:13:17 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 05:13:17 localhost podman[324092]: 2025-11-28 10:13:17.054685477 +0000 UTC m=+0.159848685 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 28 05:13:17 localhost podman[324099]: 2025-11-28 10:13:17.063306203 +0000 UTC m=+0.155163441 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible)
Nov 28 05:13:17 localhost podman[324099]: 2025-11-28 10:13:17.071531497 +0000 UTC m=+0.163388775 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 05:13:17 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully.
Nov 28 05:13:17 localhost podman[324092]: 2025-11-28 10:13:17.086904841 +0000 UTC m=+0.192067989 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 05:13:17 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 05:13:17 localhost podman[324094]: 2025-11-28 10:13:17.146096058 +0000 UTC m=+0.243807957 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 28 05:13:17 localhost podman[324094]: 2025-11-28 10:13:17.150247287 +0000 UTC m=+0.247959166 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 28 05:13:17 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 05:13:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Nov 28 05:13:17 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Nov 28 05:13:17 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:17 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "alice bob", "format": "json"}]: dispatch
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:13:17 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:17 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Nov 28 05:13:17 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:17 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Nov 28 05:13:17 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Nov 28 05:13:17 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 96 KiB/s wr, 6 op/s
Nov 28 05:13:18 localhost nova_compute[280168]: 2025-11-28 10:13:18.752 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:19 localhost nova_compute[280168]: 2025-11-28 10:13:19.160 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.
Nov 28 05:13:19 localhost podman[324177]: 2025-11-28 10:13:19.971101286 +0000 UTC m=+0.076348208 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible)
Nov 28 05:13:19 localhost podman[324177]: 2025-11-28 10:13:19.983393255 +0000 UTC m=+0.088640167 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 28 05:13:19 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v658: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 65 KiB/s wr, 3 op/s
Nov 28 05:13:19 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully.
Nov 28 05:13:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch
Nov 28 05:13:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Nov 28 05:13:20 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:20 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID bob with tenant 38de2f991c8946e4ad86ddc6b9c2ae73
Nov 28 05:13:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} v 0)
Nov 28 05:13:20 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:20 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:20 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"} : dispatch
Nov 28 05:13:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "mon", "allow r"], "format": "json"}]': finished
Nov 28 05:13:21 localhost nova_compute[280168]: 2025-11-28 10:13:21.711 280172
DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:21 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v659: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 128 KiB/s wr, 7 op/s Nov 28 05:13:23 localhost nova_compute[280168]: 2025-11-28 10:13:23.344 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:23 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:13:23 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:13:23 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:13:23 localhost podman[324219]: 2025-11-28 10:13:23.380558304 +0000 UTC m=+0.058361212 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:13:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:13:23 localhost podman[324233]: 2025-11-28 10:13:23.500220899 +0000 UTC m=+0.088042350 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 28 05:13:23 localhost podman[324233]: 2025-11-28 10:13:23.514580932 +0000 UTC m=+0.102402433 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 28 05:13:23 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully.
Nov 28 05:13:23 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 65 KiB/s wr, 4 op/s
Nov 28 05:13:24 localhost nova_compute[280168]: 2025-11-28 10:13:24.187 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:24 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Nov 28 05:13:24 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/.meta.tmp'
Nov 28 05:13:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/.meta.tmp' to config b'/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/.meta'
Nov 28 05:13:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "format": "json"}]: dispatch
Nov 28 05:13:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:25 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v661: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 65 KiB/s wr, 4 op/s
Nov 28 05:13:26 localhost nova_compute[280168]: 2025-11-28 10:13:26.562 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:26 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses
Nov 28 05:13:26 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host
Nov 28 05:13:26 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts
Nov 28 05:13:26 localhost podman[324273]: 2025-11-28 10:13:26.563854932 +0000 UTC m=+0.093613861 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 28 05:13:26 localhost nova_compute[280168]: 2025-11-28 10:13:26.714 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:13:27 localhost openstack_network_exporter[240973]: ERROR 10:13:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 28 05:13:27 localhost openstack_network_exporter[240973]: ERROR 10:13:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:13:27 localhost openstack_network_exporter[240973]: ERROR 10:13:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 28 05:13:27 localhost openstack_network_exporter[240973]: ERROR 10:13:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 28 05:13:27 localhost openstack_network_exporter[240973]:
Nov 28 05:13:27 localhost openstack_network_exporter[240973]: ERROR 10:13:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 28 05:13:27 localhost openstack_network_exporter[240973]:
Nov 28 05:13:27 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 79 KiB/s wr, 5 op/s
Nov 28 05:13:28 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "auth_id": "bob", "tenant_id": "38de2f991c8946e4ad86ddc6b9c2ae73", "access_level": "rw", "format": "json"}]: dispatch
Nov 28 05:13:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Nov 28 05:13:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97,allow rw path=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50,allow rw pool=manila_data namespace=fsvolumens_1f8e6c04-5771-4f46-846b-71f913803117"]} v 0)
Nov 28 05:13:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97,allow rw path=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50,allow rw pool=manila_data namespace=fsvolumens_1f8e6c04-5771-4f46-846b-71f913803117"]} : dispatch
Nov 28 05:13:28 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Nov 28 05:13:28 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:28 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, tenant_id:38de2f991c8946e4ad86ddc6b9c2ae73, vol_name:cephfs) < ""
Nov 28 05:13:28 localhost podman[239012]: time="2025-11-28T10:13:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 28 05:13:28 localhost podman[239012]: @ - - [28/Nov/2025:10:13:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1"
Nov 28 05:13:28 localhost podman[239012]: @ - - [28/Nov/2025:10:13:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19247 "" "Go-http-client/1.1"
Nov 28 05:13:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:29 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97,allow rw path=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50,allow rw pool=manila_data namespace=fsvolumens_1f8e6c04-5771-4f46-846b-71f913803117"]} : dispatch
Nov 28 05:13:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97,allow rw path=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50,allow rw pool=manila_data namespace=fsvolumens_1f8e6c04-5771-4f46-846b-71f913803117"]} : dispatch
Nov 28 05:13:29 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97,allow rw path=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50,allow rw pool=manila_data namespace=fsvolumens_1f8e6c04-5771-4f46-846b-71f913803117"]}]': finished
Nov 28 05:13:29 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:29 localhost nova_compute[280168]: 2025-11-28 10:13:29.213 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:29 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v663: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 77 KiB/s wr, 4 op/s
Nov 28 05:13:31 localhost nova_compute[280168]: 2025-11-28 10:13:31.749 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:13:31 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 112 KiB/s wr, 6 op/s
Nov 28 05:13:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "auth_id": "bob", "format": "json"}]: dispatch
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Nov 28 05:13:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:32 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50"]} v 0)
Nov 28 05:13:32 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50"]} : dispatch
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.682585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812682692, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2594, "num_deletes": 259, "total_data_size": 3815636, "memory_usage": 3876696, "flush_reason": "Manual Compaction"}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812705459, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 2498832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30476, "largest_seqno": 33065, "table_properties": {"data_size": 2488792, "index_size": 6097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25107, "raw_average_key_size": 22, "raw_value_size": 2467168, "raw_average_value_size": 2175, "num_data_blocks": 263, "num_entries": 1134, "num_filter_entries": 1134, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324696, "oldest_key_time": 1764324696, "file_creation_time": 1764324812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 22927 microseconds, and 8378 cpu microseconds.
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.705526) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 2498832 bytes OK
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.705556) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.707885) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.707909) EVENT_LOG_v1 {"time_micros": 1764324812707902, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.707933) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3803279, prev total WAL file size 3803279, number of live WAL files 2.
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.708990) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2440KB)], [48(17MB)]
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812709040, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 21214289, "oldest_snapshot_seqno": -1}
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "auth_id": "bob", "format": "json"}]: dispatch
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117/99d1667a-36eb-45db-8990-56f5fc443d2e
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Nov 28 05:13:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < ""
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14367 keys, 19606996 bytes, temperature: kUnknown
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812845176, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 19606996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19522778, "index_size": 47189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35973, "raw_key_size": 383945, "raw_average_key_size": 26, "raw_value_size": 19276939, "raw_average_value_size": 1341, "num_data_blocks": 1771, "num_entries": 14367, "num_filter_entries": 14367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324812, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.845529) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 19606996 bytes
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.847682) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 17.8 +0.0 blob) out(18.7 +0.0 blob), read-write-amplify(16.3) write-amplify(7.8) OK, records in: 14908, records dropped: 541 output_compression: NoCompression
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.847715) EVENT_LOG_v1 {"time_micros": 1764324812847698, "job": 28, "event": "compaction_finished", "compaction_time_micros": 136271, "compaction_time_cpu_micros": 50852, "output_level": 6, "num_output_files": 1, "total_output_size": 19606996, "num_input_records": 14908, "num_output_records": 14367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812848318, "job": 28, "event": "table_file_deletion", "file_number": 50}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324812851209, "job": 28, "event": "table_file_deletion", "file_number": 48}
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.708929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.851307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.851313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.851316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.851320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:13:32.851322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 28 05:13:32 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50"]} : dispatch
Nov 28 05:13:32 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:32 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50"]} : dispatch
Nov 28 05:13:32 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97", "osd", "allow rw pool=manila_data namespace=fsvolumens_3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50"]}]': finished
Nov 28 05:13:33 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:13:33 localhost podman[324312]: 2025-11-28 10:13:33.83016436 +0000 UTC m=+0.082148286 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal,
build-date=2025-08-20T13:12:41, config_id=edpm) Nov 28 05:13:33 localhost podman[324312]: 2025-11-28 10:13:33.843541233 +0000 UTC m=+0.095525169 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter) Nov 28 05:13:33 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:13:33 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 49 KiB/s wr, 2 op/s Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain.devices.0}] v 0) Nov 28 05:13:34 localhost nova_compute[280168]: 2025-11-28 10:13:34.249 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538513.localdomain}] v 0) Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538514.localdomain.devices.0}] v 0) Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, 
key=mgr/cephadm/host.np0005538514.localdomain}] v 0)
Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain.devices.0}] v 0)
Nov 28 05:13:34 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005538515.localdomain}] v 0)
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 28 05:13:35 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 28 05:13:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 28 05:13:35 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev
36881bde-4e60-4842-9021-b86e61a7a555 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:13:35 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 36881bde-4e60-4842-9021-b86e61a7a555 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:13:35 localhost ceph-mgr[286188]: [progress INFO root] Completed event 36881bde-4e60-4842-9021-b86e61a7a555 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:13:35 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:13:35 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:13:35.835 261346 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-11-28T10:13:35Z, description=, device_id=b0257272-76a6-44da-9e3f-446bfab91a2f, device_owner=network:router_gateway, dns_assignment=[], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[], id=d0e1c2d7-bdcb-42a0-9bb5-3966853464ac, ip_allocation=immediate, mac_address=fa:16:3e:b0:0a:c9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-11-28T08:32:19Z, description=, dns_domain=, id=887157f9-a765-40c0-8be5-1fba3ddea8f8, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=9dda653c53224db086060962b0702694, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['5f7de60c-f82a-4f40-b803-51cb08cbf2e3'], tags=[], tenant_id=9dda653c53224db086060962b0702694, updated_at=2025-11-28T08:32:25Z, vlan_transparent=None, network_id=887157f9-a765-40c0-8be5-1fba3ddea8f8, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3862, status=DOWN, tags=[], tenant_id=, updated_at=2025-11-28T10:13:35Z on network 887157f9-a765-40c0-8be5-1fba3ddea8f8#033[00m Nov 28 05:13:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": 
"3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "bob", "format": "json"}]: dispatch
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Nov 28 05:13:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:35 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Nov 28 05:13:35 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < ""
Nov 28 05:13:35 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "auth_id": "bob", "format": "json"}]: dispatch
Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict,
sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50/21395b47-9479-472b-88c4-193165317a97 Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:13:35 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:35 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 49 KiB/s wr, 2 op/s Nov 28 05:13:36 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 2 addresses Nov 28 05:13:36 localhost podman[324473]: 2025-11-28 10:13:36.024680934 +0000 UTC m=+0.050301844 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2) Nov 28 05:13:36 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:13:36 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:13:36 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 
05:13:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Nov 28 05:13:36 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl'
Nov 28 05:13:36 localhost neutron_dhcp_agent[261342]: 2025-11-28 10:13:36.294 261346 INFO neutron.agent.dhcp.agent [None req-e8d73316-c6da-412d-abde-99448150d6b0 - - - - - -] DHCP configuration for ports {'d0e1c2d7-bdcb-42a0-9bb5-3966853464ac'} is completed#033[00m
Nov 28 05:13:36 localhost nova_compute[280168]: 2025-11-28 10:13:36.555 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:36 localhost nova_compute[280168]: 2025-11-28 10:13:36.752 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:13:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290
_set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:37.641 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:13:37 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:37.642 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:13:37 localhost nova_compute[280168]: 2025-11-28 10:13:37.674 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:37 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v667: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 101 KiB/s wr, 5 op/s Nov 28 05:13:39 localhost nova_compute[280168]: 2025-11-28 10:13:39.279 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:39 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v668: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 87 KiB/s wr, 4 op/s Nov 28 05:13:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' 
cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1f8e6c04-5771-4f46-846b-71f913803117", "format": "json"}]: dispatch Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1f8e6c04-5771-4f46-846b-71f913803117, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1f8e6c04-5771-4f46-846b-71f913803117, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:40 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:13:40.079+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f8e6c04-5771-4f46-846b-71f913803117' of type subvolume Nov 28 05:13:40 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1f8e6c04-5771-4f46-846b-71f913803117' of type subvolume Nov 28 05:13:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1f8e6c04-5771-4f46-846b-71f913803117", "force": true, "format": "json"}]: dispatch Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < "" Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1f8e6c04-5771-4f46-846b-71f913803117'' moved to trashcan Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:13:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1f8e6c04-5771-4f46-846b-71f913803117, vol_name:cephfs) < "" Nov 28 05:13:41 localhost nova_compute[280168]: 2025-11-28 10:13:41.801 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:41 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 114 KiB/s wr, 5 op/s Nov 28 05:13:42 localhost nova_compute[280168]: 2025-11-28 10:13:42.510 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "format": "json"}]: dispatch Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:13:43 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:13:43.252+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50' of type subvolume Nov 28 05:13:43 localhost ceph-mgr[286188]: mgr.server reply reply 
(95) Operation not supported operation 'clone-status' is not allowed on subvolume '3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50' of type subvolume Nov 28 05:13:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50", "force": true, "format": "json"}]: dispatch Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50'' moved to trashcan Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:13:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3592a7eb-5b6e-4ca3-b6c3-e13b81ee2f50, vol_name:cephfs) < "" Nov 28 05:13:43 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v670: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s wr, 3 op/s Nov 28 05:13:44 localhost nova_compute[280168]: 2025-11-28 10:13:44.282 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:45 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s wr, 3 op/s Nov 28 05:13:46 localhost nova_compute[280168]: 2025-11-28 10:13:46.833 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:46 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:47 localhost dnsmasq[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/addn_hosts - 1 addresses Nov 28 05:13:47 localhost podman[324513]: 2025-11-28 10:13:47.116758784 +0000 UTC m=+0.054487764 container kill d6ba70e100412f39fbf43f2359746885ccd151dd93bddc3682d2651a49a36c62 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-887157f9-a765-40c0-8be5-1fba3ddea8f8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:13:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/host Nov 28 05:13:47 localhost dnsmasq-dhcp[310862]: read /var/lib/neutron/dhcp/887157f9-a765-40c0-8be5-1fba3ddea8f8/opts Nov 28 05:13:47 localhost nova_compute[280168]: 2025-11-28 10:13:47.125 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:13:47 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 05:13:47 localhost systemd[1]: tmp-crun.ErMjot.mount: Deactivated successfully. Nov 28 05:13:47 localhost podman[324529]: 2025-11-28 10:13:47.263485483 +0000 UTC m=+0.111724590 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:13:47 localhost podman[324529]: 2025-11-28 10:13:47.294985955 +0000 UTC m=+0.143225072 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': 
'/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3) Nov 28 05:13:47 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:13:47 localhost podman[324530]: 2025-11-28 10:13:47.308211123 +0000 UTC m=+0.153180529 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:13:47 localhost podman[324530]: 2025-11-28 10:13:47.31295542 +0000 UTC m=+0.157924816 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', 
'/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:13:47 localhost podman[324528]: 2025-11-28 10:13:47.221415964 +0000 UTC m=+0.074813260 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS) Nov 28 05:13:47 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:13:47 localhost podman[324567]: 2025-11-28 10:13:47.377798952 +0000 UTC m=+0.137015951 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:13:47 localhost podman[324528]: 2025-11-28 10:13:47.402314229 +0000 UTC m=+0.255711535 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0) Nov 28 05:13:47 localhost systemd[1]: 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:13:47 localhost podman[324567]: 2025-11-28 10:13:47.456701597 +0000 UTC m=+0.215918606 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team) Nov 28 
05:13:47 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:13:47 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:47.644 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:13:48 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 112 KiB/s wr, 5 op/s Nov 28 05:13:49 localhost nova_compute[280168]: 2025-11-28 10:13:49.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:49 localhost nova_compute[280168]: 2025-11-28 10:13:49.286 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:50 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 2 op/s Nov 28 05:13:50 localhost nova_compute[280168]: 2025-11-28 10:13:50.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:50.856 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock 
"_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:13:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:50.857 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:13:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:13:50.857 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:13:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:13:50 localhost systemd[1]: tmp-crun.l8XfvL.mount: Deactivated successfully. 
Nov 28 05:13:50 localhost podman[324614]: 2025-11-28 10:13:50.974110879 +0000 UTC m=+0.074573593 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:13:50 localhost podman[324614]: 2025-11-28 10:13:50.983321663 +0000 UTC m=+0.083784357 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 
'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:13:50 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:13:51 localhost nova_compute[280168]: 2025-11-28 10:13:51.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:51 localhost nova_compute[280168]: 2025-11-28 10:13:51.866 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:52 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s Nov 28 05:13:52 localhost nova_compute[280168]: 2025-11-28 10:13:52.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:52 localhost nova_compute[280168]: 2025-11-28 10:13:52.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:13:53 localhost nova_compute[280168]: 2025-11-28 10:13:53.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:53 localhost nova_compute[280168]: 2025-11-28 10:13:53.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:13:53 localhost nova_compute[280168]: 2025-11-28 10:13:53.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:13:53 localhost nova_compute[280168]: 2025-11-28 10:13:53.375 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:13:53 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:13:53 localhost podman[324637]: 2025-11-28 10:13:53.971527018 +0000 UTC m=+0.082807558 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:13:53 localhost podman[324637]: 2025-11-28 10:13:53.986584103 +0000 UTC m=+0.097864633 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd) Nov 28 05:13:54 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:13:54 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v675: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s Nov 28 05:13:54 localhost nova_compute[280168]: 2025-11-28 10:13:54.290 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:54 localhost nova_compute[280168]: 2025-11-28 10:13:54.370 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:55 localhost nova_compute[280168]: 2025-11-28 10:13:55.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:55 localhost nova_compute[280168]: 2025-11-28 10:13:55.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 
05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/.meta.tmp' Nov 28 05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/.meta.tmp' to config b'/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/.meta' Nov 28 05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:13:55 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "format": "json"}]: dispatch Nov 28 05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:13:55 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:13:56 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 33 KiB/s wr, 2 op/s Nov 28 05:13:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:13:56 localhost nova_compute[280168]: 2025-11-28 10:13:56.905 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.269 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.270 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.270 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.270 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.271 280172 DEBUG 
oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:13:57 localhost openstack_network_exporter[240973]: ERROR 10:13:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:13:57 localhost openstack_network_exporter[240973]: ERROR 10:13:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:13:57 localhost openstack_network_exporter[240973]: ERROR 10:13:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:13:57 localhost openstack_network_exporter[240973]: ERROR 10:13:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:13:57 localhost openstack_network_exporter[240973]: Nov 28 05:13:57 localhost openstack_network_exporter[240973]: ERROR 10:13:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:13:57 localhost openstack_network_exporter[240973]: Nov 28 05:13:57 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:13:57 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/2582566830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.673 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.890 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.892 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11454MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.892 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.893 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.997 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:13:57 localhost nova_compute[280168]: 2025-11-28 10:13:57.998 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:13:58 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 46 KiB/s wr, 3 op/s Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.015 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:13:58 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:13:58 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1398638247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.418 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.424 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.441 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.444 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:13:58 localhost nova_compute[280168]: 2025-11-28 10:13:58.445 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:13:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/.meta.tmp' Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/.meta.tmp' to config b'/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/.meta' Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:13:58 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "format": "json"}]: dispatch Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting 
_cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:13:58 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:13:58 localhost podman[239012]: time="2025-11-28T10:13:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:13:58 localhost podman[239012]: @ - - [28/Nov/2025:10:13:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:13:58 localhost podman[239012]: @ - - [28/Nov/2025:10:13:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19259 "" "Go-http-client/1.1" Nov 28 05:13:59 localhost nova_compute[280168]: 2025-11-28 10:13:59.332 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:13:59 localhost nova_compute[280168]: 2025-11-28 10:13:59.446 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:00 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 
10:14:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] 
Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found 
this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:14:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:14:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:01 localhost nova_compute[280168]: 2025-11-28 10:14:01.957 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:02 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s wr, 1 op/s Nov 28 05:14:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "auth_id": "Joe", "tenant_id": "301971e834c14ea7aa009696c3f04782", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 05:14:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": 
"client.Joe", "format": "json"} v 0) Nov 28 05:14:03 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:03 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID Joe with tenant 301971e834c14ea7aa009696c3f04782 Nov 28 05:14:03 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411", "osd", "allow rw pool=manila_data namespace=fsvolumens_8d719993-3b66-454f-a026-687de7e6b3e4", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:14:03 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411", "osd", "allow rw pool=manila_data namespace=fsvolumens_8d719993-3b66-454f-a026-687de7e6b3e4", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 05:14:03 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:14:03 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:03 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411", "osd", "allow rw pool=manila_data namespace=fsvolumens_8d719993-3b66-454f-a026-687de7e6b3e4", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:03 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411", "osd", "allow rw pool=manila_data namespace=fsvolumens_8d719993-3b66-454f-a026-687de7e6b3e4", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:03 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411", "osd", "allow rw pool=manila_data namespace=fsvolumens_8d719993-3b66-454f-a026-687de7e6b3e4", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:14:03 localhost systemd[1]: tmp-crun.VeoBdq.mount: Deactivated successfully. 
Nov 28 05:14:03 localhost podman[324702]: 2025-11-28 10:14:03.980386459 +0000 UTC m=+0.088075841 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b) Nov 28 05:14:03 localhost podman[324702]: 2025-11-28 10:14:03.992440451 +0000 UTC m=+0.100129863 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI) Nov 28 05:14:04 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:14:04 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s Nov 28 05:14:04 localhost nova_compute[280168]: 2025-11-28 10:14:04.367 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:14:05 Nov 28 05:14:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:14:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:14:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['volumes', 'vms', 'manila_data', '.mgr', 'backups', 'manila_metadata', 'images'] Nov 28 05:14:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:14:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v681: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s wr, 1 op/s Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:14:06 localhost ceph-mgr[286188]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32) Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:14:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002426956832774986 of space, bias 4.0, pg target 1.931857638888889 quantized to 16 (current 16) Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:14:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:14:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:07 localhost nova_compute[280168]: 2025-11-28 10:14:07.017 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup 
/usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/.meta.tmp' Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/.meta.tmp' to config b'/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/.meta' Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:07 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "format": "json"}]: dispatch Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:07 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, 
prefix:fs subvolume getpath, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:08 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s Nov 28 05:14:09 localhost nova_compute[280168]: 2025-11-28 10:14:09.377 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:10 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s wr, 2 op/s Nov 28 05:14:10 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "Joe", "tenant_id": "b90c445933704341b38d135548fb5388", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:10 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Nov 28 05:14:10 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:10 localhost ceph-mgr[286188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use Nov 28 05:14:10 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing 
_cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:10 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:10.410+0000 7fcc87448640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Nov 28 05:14:10 localhost ceph-mgr[286188]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use Nov 28 05:14:11 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:12 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 75 KiB/s wr, 3 op/s Nov 28 05:14:12 localhost nova_compute[280168]: 2025-11-28 10:14:12.061 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:14:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4282388622' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:14:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:14:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/4282388622' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:14:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "tempest-cephx-id-241168775", "tenant_id": "b90c445933704341b38d135548fb5388", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume authorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} v 0) Nov 28 05:14:13 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} : dispatch Nov 28 05:14:13 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID tempest-cephx-id-241168775 with tenant b90c445933704341b38d135548fb5388 Nov 28 05:14:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-241168775", "caps": ["mds", "allow rw path=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a", "osd", "allow rw pool=manila_data namespace=fsvolumens_d56bb3f2-efa0-4328-9320-c5298bccaeb7", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:14:13 localhost ceph-mon[301134]: log_channel(audit) log [INF] : 
from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-241168775", "caps": ["mds", "allow rw path=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a", "osd", "allow rw pool=manila_data namespace=fsvolumens_d56bb3f2-efa0-4328-9320-c5298bccaeb7", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume authorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:13 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} : dispatch Nov 28 05:14:13 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-241168775", "caps": ["mds", "allow rw path=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a", "osd", "allow rw pool=manila_data namespace=fsvolumens_d56bb3f2-efa0-4328-9320-c5298bccaeb7", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:13 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-241168775", "caps": ["mds", "allow rw path=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a", "osd", "allow rw pool=manila_data namespace=fsvolumens_d56bb3f2-efa0-4328-9320-c5298bccaeb7", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:13 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' 
cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-241168775", "caps": ["mds", "allow rw path=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a", "osd", "allow rw pool=manila_data namespace=fsvolumens_d56bb3f2-efa0-4328-9320-c5298bccaeb7", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:14:14 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s wr, 2 op/s Nov 28 05:14:14 localhost nova_compute[280168]: 2025-11-28 10:14:14.404 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:16 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s wr, 2 op/s Nov 28 05:14:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "Joe", "format": "json"}]: dispatch Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'd56bb3f2-efa0-4328-9320-c5298bccaeb7' Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, 
format:json, prefix:fs subvolume deauthorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "Joe", "format": "json"}]: dispatch Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:14:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:17 localhost nova_compute[280168]: 2025-11-28 10:14:17.096 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. 
Nov 28 05:14:17 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:14:17 localhost podman[324722]: 2025-11-28 10:14:17.992714175 +0000 UTC m=+0.095927492 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible) Nov 28 05:14:18 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v687: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s wr, 3 op/s Nov 28 05:14:18 localhost podman[324722]: 2025-11-28 10:14:18.026925532 +0000 UTC m=+0.130138839 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:14:18 localhost systemd[1]: tmp-crun.K0wRry.mount: Deactivated successfully. Nov 28 05:14:18 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:14:18 localhost podman[324724]: 2025-11-28 10:14:18.050449628 +0000 UTC m=+0.144314106 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:14:18 localhost podman[324724]: 2025-11-28 10:14:18.07935682 +0000 UTC m=+0.173221298 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:14:18 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:14:18 localhost podman[324723]: 2025-11-28 10:14:18.098040127 +0000 UTC m=+0.194412602 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:14:18 localhost podman[324730]: 2025-11-28 10:14:18.179828451 +0000 UTC m=+0.267182218 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:14:18 localhost podman[324730]: 2025-11-28 10:14:18.18821623 +0000 UTC m=+0.275570037 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:14:18 localhost podman[324723]: 2025-11-28 
10:14:18.19725735 +0000 UTC m=+0.293629855 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:14:18 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:14:18 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:14:18 localhost ovn_controller[152726]: 2025-11-28T10:14:18Z|00196|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory Nov 28 05:14:19 localhost nova_compute[280168]: 2025-11-28 10:14:19.439 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 227 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 55 KiB/s wr, 2 op/s Nov 28 05:14:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "tempest-cephx-id-241168775", "format": "json"}]: dispatch Nov 28 05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume deauthorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} v 0) Nov 28 05:14:20 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} : dispatch Nov 28 05:14:20 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-241168775"} v 0) Nov 28 05:14:20 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-241168775"} : dispatch Nov 28 
05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume deauthorize, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "auth_id": "tempest-cephx-id-241168775", "format": "json"}]: dispatch Nov 28 05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume evict, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-241168775, client_metadata.root=/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7/3fd14072-fd4e-434b-b433-15cdcb82070a Nov 28 05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:14:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-241168775, format:json, prefix:fs subvolume evict, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-241168775"} : dispatch Nov 28 05:14:21 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-241168775", "format": "json"} : dispatch Nov 28 05:14:21 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": 
"auth rm", "entity": "client.tempest-cephx-id-241168775"} : dispatch Nov 28 05:14:21 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-241168775"}]': finished Nov 28 05:14:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:14:21 localhost podman[324807]: 2025-11-28 10:14:21.979821577 +0000 UTC m=+0.085810160 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', 
'/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:14:21 localhost podman[324807]: 2025-11-28 10:14:21.98478344 +0000 UTC m=+0.090772063 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:14:21 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:14:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s wr, 4 op/s Nov 28 05:14:22 localhost nova_compute[280168]: 2025-11-28 10:14:22.116 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "auth_id": "Joe", "format": "json"}]: dispatch Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0) Nov 28 05:14:23 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:23 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0) Nov 28 05:14:23 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:23 localhost ceph-mgr[286188]: log_channel(audit) log 
[DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "auth_id": "Joe", "format": "json"}]: dispatch Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4/a35af86e-1bef-43ca-805f-c714f40e8411 Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:14:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Nov 28 05:14:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch Nov 28 05:14:23 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch Nov 28 05:14:23 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished Nov 28 05:14:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 3 op/s Nov 28 05:14:24 localhost nova_compute[280168]: 2025-11-28 10:14:24.470 
280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:24 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:14:24 localhost systemd[1]: tmp-crun.ERSGU6.mount: Deactivated successfully. Nov 28 05:14:24 localhost podman[324831]: 2025-11-28 10:14:24.982646944 +0000 UTC m=+0.091533888 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd) Nov 28 05:14:24 localhost podman[324831]: 2025-11-28 10:14:24.998599645 +0000 UTC m=+0.107486589 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:14:25 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:14:26 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 64 KiB/s wr, 3 op/s Nov 28 05:14:26 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "auth_id": "admin", "tenant_id": "301971e834c14ea7aa009696c3f04782", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 05:14:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0) Nov 28 05:14:26 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Nov 28 05:14:26 localhost ceph-mgr[286188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. 
Not allowed to modify Nov 28 05:14:26 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 05:14:26 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:26.878+0000 7fcc87448640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Nov 28 05:14:26 localhost ceph-mgr[286188]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify Nov 28 05:14:27 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch Nov 28 05:14:27 localhost nova_compute[280168]: 2025-11-28 10:14:27.157 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:27 localhost openstack_network_exporter[240973]: ERROR 10:14:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:14:27 localhost openstack_network_exporter[240973]: ERROR 10:14:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:14:27 localhost openstack_network_exporter[240973]: ERROR 10:14:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:14:27 localhost openstack_network_exporter[240973]: ERROR 10:14:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:14:27 localhost openstack_network_exporter[240973]: Nov 28 05:14:27 localhost openstack_network_exporter[240973]: ERROR 
10:14:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:14:27 localhost openstack_network_exporter[240973]: Nov 28 05:14:28 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 100 KiB/s wr, 5 op/s Nov 28 05:14:28 localhost podman[239012]: time="2025-11-28T10:14:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:14:28 localhost podman[239012]: @ - - [28/Nov/2025:10:14:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:14:28 localhost podman[239012]: @ - - [28/Nov/2025:10:14:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19251 "" "Go-http-client/1.1" Nov 28 05:14:29 localhost nova_compute[280168]: 2025-11-28 10:14:29.501 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:30 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 228 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 67 KiB/s wr, 3 op/s Nov 28 05:14:30 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "auth_id": "david", "tenant_id": "301971e834c14ea7aa009696c3f04782", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 
05:14:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 28 05:14:30 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:30 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: Creating meta for ID david with tenant 301971e834c14ea7aa009696c3f04782 Nov 28 05:14:30 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a", "osd", "allow rw pool=manila_data namespace=fsvolumens_4c25470d-c14c-4093-b430-b79c735aaf06", "mon", "allow r"], "format": "json"} v 0) Nov 28 05:14:30 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a", "osd", "allow rw pool=manila_data namespace=fsvolumens_4c25470d-c14c-4093-b430-b79c735aaf06", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:30 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, tenant_id:301971e834c14ea7aa009696c3f04782, vol_name:cephfs) < "" Nov 28 05:14:31 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:31 
localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a", "osd", "allow rw pool=manila_data namespace=fsvolumens_4c25470d-c14c-4093-b430-b79c735aaf06", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:31 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a", "osd", "allow rw pool=manila_data namespace=fsvolumens_4c25470d-c14c-4093-b430-b79c735aaf06", "mon", "allow r"], "format": "json"} : dispatch Nov 28 05:14:31 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a", "osd", "allow rw pool=manila_data namespace=fsvolumens_4c25470d-c14c-4093-b430-b79c735aaf06", "mon", "allow r"], "format": "json"}]': finished Nov 28 05:14:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:32 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 98 KiB/s wr, 5 op/s Nov 28 05:14:32 localhost nova_compute[280168]: 2025-11-28 10:14:32.185 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:33 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' 
entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:14:33 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:34 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s Nov 28 05:14:34 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e1e8ed83-707d-47a8-914a-3aa1c73e18ce/.meta.tmp' Nov 28 05:14:34 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e1e8ed83-707d-47a8-914a-3aa1c73e18ce/.meta.tmp' to config b'/volumes/_nogroup/e1e8ed83-707d-47a8-914a-3aa1c73e18ce/.meta' Nov 28 05:14:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:34 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "format": "json"}]: dispatch Nov 28 05:14:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:34 localhost ceph-mgr[286188]: [volumes INFO volumes.module] 
Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:34 localhost nova_compute[280168]: 2025-11-28 10:14:34.507 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:34 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:14:34 localhost podman[324850]: 2025-11-28 10:14:34.973270061 +0000 UTC m=+0.080935080 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}) Nov 28 05:14:34 localhost podman[324850]: 2025-11-28 10:14:34.985294622 +0000 UTC m=+0.092959671 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, 
io.k8s.display-name=Red Hat Universal Base Image 9 Minimal) Nov 28 05:14:34 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:14:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:14:36 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 67 KiB/s wr, 3 op/s Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:14:36 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:14:36 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:14:36 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 488fe40d-168e-47e8-9e59-4eefa7962b43 
(Updating node-proxy deployment (+3 -> 3)) Nov 28 05:14:36 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 488fe40d-168e-47e8-9e59-4eefa7962b43 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:14:36 localhost ceph-mgr[286188]: [progress INFO root] Completed event 488fe40d-168e-47e8-9e59-4eefa7962b43 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:14:36 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:14:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "auth_id": "david", "tenant_id": "b90c445933704341b38d135548fb5388", "access_level": "rw", "format": "json"}]: dispatch Nov 28 05:14:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 28 05:14:36 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:36 localhost ceph-mgr[286188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use Nov 28 
05:14:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, tenant_id:b90c445933704341b38d135548fb5388, vol_name:cephfs) < "" Nov 28 05:14:36 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:36.727+0000 7fcc87448640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Nov 28 05:14:36 localhost ceph-mgr[286188]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use Nov 28 05:14:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:37 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:14:37 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:14:37 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:37 localhost nova_compute[280168]: 2025-11-28 10:14:37.220 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:38 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 95 KiB/s wr, 5 op/s Nov 28 05:14:39 localhost nova_compute[280168]: 2025-11-28 10:14:39.537 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:40 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v698: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 59 KiB/s wr, 2 op/s Nov 28 05:14:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "auth_id": "david", "format": "json"}]: dispatch Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume 'e1e8ed83-707d-47a8-914a-3aa1c73e18ce' Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:40 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "auth_id": "david", "format": "json"}]: dispatch Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/e1e8ed83-707d-47a8-914a-3aa1c73e18ce/984304c3-0bfb-45a6-bfeb-644c624ba49e Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO 
volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:14:40 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:41 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:14:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:14:41 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:14:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:42 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 68 KiB/s wr, 3 op/s Nov 28 05:14:42 localhost nova_compute[280168]: 2025-11-28 10:14:42.260 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "auth_id": "david", "format": "json"}]: dispatch Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:43 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0) Nov 28 05:14:43 localhost ceph-mon[301134]: 
log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:43 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0) Nov 28 05:14:43 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:43 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "auth_id": "david", "format": "json"}]: dispatch Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06/471f0b39-a16d-4e49-a0e3-afb597cde17a Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all Nov 28 05:14:43 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:43 localhost ceph-mon[301134]: from='mgr.34481 ' 
entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Nov 28 05:14:43 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch Nov 28 05:14:43 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch Nov 28 05:14:43 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished Nov 28 05:14:44 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s wr, 1 op/s Nov 28 05:14:44 localhost nova_compute[280168]: 2025-11-28 10:14:44.540 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:46 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 36 KiB/s wr, 1 op/s Nov 28 05:14:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "format": "json"}]: dispatch Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:46 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:46.834+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1e8ed83-707d-47a8-914a-3aa1c73e18ce' of type subvolume Nov 28 05:14:46 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e1e8ed83-707d-47a8-914a-3aa1c73e18ce' of type subvolume Nov 28 05:14:46 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1e8ed83-707d-47a8-914a-3aa1c73e18ce", "force": true, "format": "json"}]: dispatch Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e1e8ed83-707d-47a8-914a-3aa1c73e18ce'' moved to trashcan Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:14:46 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e1e8ed83-707d-47a8-914a-3aa1c73e18ce, vol_name:cephfs) < "" Nov 28 05:14:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:47 localhost nova_compute[280168]: 2025-11-28 10:14:47.290 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:48 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap 
v702: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 67 KiB/s wr, 3 op/s Nov 28 05:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:14:48 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:14:49 localhost podman[324960]: 2025-11-28 10:14:49.002630883 +0000 UTC m=+0.097468701 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3) Nov 28 05:14:49 localhost podman[324960]: 2025-11-28 10:14:49.031834704 +0000 UTC m=+0.126672632 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:14:49 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:14:49 localhost podman[324961]: 2025-11-28 10:14:49.046973882 +0000 UTC m=+0.136289439 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:14:49 localhost podman[324961]: 2025-11-28 10:14:49.055632449 +0000 UTC m=+0.144948056 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 
(image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:14:49 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:14:49 localhost podman[324958]: 2025-11-28 10:14:49.104954991 +0000 UTC m=+0.200064517 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125) Nov 28 05:14:49 localhost podman[324958]: 2025-11-28 10:14:49.115685453 +0000 UTC m=+0.210794989 container exec_died 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible) Nov 28 05:14:49 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:14:49 localhost podman[324959]: 2025-11-28 10:14:49.209760297 +0000 UTC m=+0.304855853 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2) Nov 28 05:14:49 localhost podman[324959]: 2025-11-28 10:14:49.266458517 +0000 UTC m=+0.361554043 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125) Nov 28 05:14:49 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:14:49 localhost nova_compute[280168]: 2025-11-28 10:14:49.565 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:50 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 229 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s wr, 2 op/s Nov 28 05:14:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "format": "json"}]: dispatch Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:50 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:50.054+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd56bb3f2-efa0-4328-9320-c5298bccaeb7' of type subvolume Nov 28 05:14:50 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd56bb3f2-efa0-4328-9320-c5298bccaeb7' of type subvolume Nov 28 05:14:50 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d56bb3f2-efa0-4328-9320-c5298bccaeb7", "force": true, "format": "json"}]: dispatch Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, 
format:json, prefix:fs subvolume rm, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d56bb3f2-efa0-4328-9320-c5298bccaeb7'' moved to trashcan Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:14:50 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d56bb3f2-efa0-4328-9320-c5298bccaeb7, vol_name:cephfs) < "" Nov 28 05:14:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:14:50.857 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:14:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:14:50.858 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:14:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:14:50.858 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:14:51 localhost nova_compute[280168]: 2025-11-28 10:14:51.237 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:51 localhost 
ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:52 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v704: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 73 KiB/s wr, 3 op/s Nov 28 05:14:52 localhost nova_compute[280168]: 2025-11-28 10:14:52.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:52 localhost nova_compute[280168]: 2025-11-28 10:14:52.340 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:14:52 localhost podman[325041]: 2025-11-28 10:14:52.979487108 +0000 UTC m=+0.084213172 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors ) Nov 28 05:14:52 localhost podman[325041]: 2025-11-28 10:14:52.992180819 +0000 UTC m=+0.096906913 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 
'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:14:53 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:14:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "format": "json"}]: dispatch Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8d719993-3b66-454f-a026-687de7e6b3e4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8d719993-3b66-454f-a026-687de7e6b3e4, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:53 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:53.275+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d719993-3b66-454f-a026-687de7e6b3e4' of type subvolume Nov 28 05:14:53 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8d719993-3b66-454f-a026-687de7e6b3e4' of type subvolume Nov 28 05:14:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8d719993-3b66-454f-a026-687de7e6b3e4", "force": true, "format": "json"}]: dispatch Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8d719993-3b66-454f-a026-687de7e6b3e4'' moved to trashcan Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Nov 28 05:14:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8d719993-3b66-454f-a026-687de7e6b3e4, vol_name:cephfs) < "" Nov 28 05:14:54 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 65 KiB/s wr, 3 op/s Nov 28 05:14:54 localhost nova_compute[280168]: 2025-11-28 10:14:54.235 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:54 localhost nova_compute[280168]: 2025-11-28 10:14:54.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:54 localhost nova_compute[280168]: 2025-11-28 10:14:54.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:14:54 localhost nova_compute[280168]: 2025-11-28 10:14:54.604 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:55 localhost nova_compute[280168]: 2025-11-28 10:14:55.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:55 localhost nova_compute[280168]: 2025-11-28 10:14:55.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:14:55 localhost nova_compute[280168]: 2025-11-28 10:14:55.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:14:55 localhost nova_compute[280168]: 2025-11-28 10:14:55.263 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:14:55 localhost nova_compute[280168]: 2025-11-28 10:14:55.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:55 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:14:55 localhost systemd[1]: tmp-crun.CdmIMV.mount: Deactivated successfully. Nov 28 05:14:55 localhost podman[325065]: 2025-11-28 10:14:55.978526467 +0000 UTC m=+0.081938611 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', 
'/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd) Nov 28 05:14:55 localhost podman[325065]: 2025-11-28 10:14:55.991399935 +0000 UTC m=+0.094812079 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, 
tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2) Nov 28 05:14:56 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:14:56 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 65 KiB/s wr, 3 op/s Nov 28 05:14:56 localhost nova_compute[280168]: 2025-11-28 10:14:56.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "auth_id": "admin", "format": "json"}]: dispatch Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 
2025-11-28T10:14:56.630+0000 7fcc87448640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Nov 28 05:14:56 localhost ceph-mgr[286188]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist Nov 28 05:14:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "format": "json"}]: dispatch Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4c25470d-c14c-4093-b430-b79c735aaf06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4c25470d-c14c-4093-b430-b79c735aaf06, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:14:56.731+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c25470d-c14c-4093-b430-b79c735aaf06' of type subvolume Nov 28 05:14:56 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4c25470d-c14c-4093-b430-b79c735aaf06' of type subvolume Nov 28 05:14:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4c25470d-c14c-4093-b430-b79c735aaf06", "force": true, "format": "json"}]: dispatch Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-mgr[286188]: 
[volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4c25470d-c14c-4093-b430-b79c735aaf06'' moved to trashcan Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:14:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4c25470d-c14c-4093-b430-b79c735aaf06, vol_name:cephfs) < "" Nov 28 05:14:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:14:57 localhost nova_compute[280168]: 2025-11-28 10:14:57.339 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:57 localhost openstack_network_exporter[240973]: ERROR 10:14:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:14:57 localhost openstack_network_exporter[240973]: ERROR 10:14:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:14:57 localhost openstack_network_exporter[240973]: ERROR 10:14:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:14:57 localhost openstack_network_exporter[240973]: ERROR 10:14:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:14:57 localhost openstack_network_exporter[240973]: Nov 28 05:14:57 localhost openstack_network_exporter[240973]: ERROR 10:14:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:14:57 localhost openstack_network_exporter[240973]: Nov 28 05:14:58 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 
41 GiB / 42 GiB avail; 597 B/s rd, 106 KiB/s wr, 5 op/s Nov 28 05:14:58 localhost podman[239012]: time="2025-11-28T10:14:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:14:58 localhost podman[239012]: @ - - [28/Nov/2025:10:14:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:14:58 localhost podman[239012]: @ - - [28/Nov/2025:10:14:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19255 "" "Go-http-client/1.1" Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.240 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.267 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.268 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner 
/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.268 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.268 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.269 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.630 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:14:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:14:59 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/149668244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.741 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.902 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.904 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11444MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.904 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.904 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.984 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:14:59 localhost nova_compute[280168]: 2025-11-28 10:14:59.985 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.016 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:15:00 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 75 KiB/s wr, 3 op/s Nov 28 05:15:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:15:00 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1184699795' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.469 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.477 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.492 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.494 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.494 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:15:00 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:00.760 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:15:00 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:00.761 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:15:00 localhost nova_compute[280168]: 2025-11-28 10:15:00.781 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:02 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 77 KiB/s wr, 5 op/s Nov 28 05:15:02 localhost nova_compute[280168]: 2025-11-28 10:15:02.386 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:04 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v710: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 44 KiB/s wr, 3 op/s Nov 28 05:15:04 localhost nova_compute[280168]: 2025-11-28 10:15:04.671 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:15:05 Nov 28 05:15:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:15:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:15:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'manila_metadata', 'manila_data', 'images', 'backups'] Nov 28 05:15:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:05 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:15:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:05 localhost podman[325128]: 2025-11-28 10:15:05.979156389 +0000 UTC m=+0.086457664 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350) Nov 28 05:15:05 localhost podman[325128]: 2025-11-28 10:15:05.992402707 +0000 UTC m=+0.099704032 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible) Nov 28 05:15:06 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:15:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 44 KiB/s wr, 3 op/s Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 
localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32) Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:15:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0026946799972082636 of space, bias 4.0, pg target 2.144965277777778 quantized to 16 (current 16) Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:15:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:15:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:07 localhost nova_compute[280168]: 2025-11-28 10:15:07.415 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:08 localhost ceph-mgr[286188]: 
log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 60 KiB/s wr, 3 op/s Nov 28 05:15:09 localhost nova_compute[280168]: 2025-11-28 10:15:09.709 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:10 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s Nov 28 05:15:10 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:10.763 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:15:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:12 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v714: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 19 KiB/s wr, 1 op/s Nov 28 05:15:12 localhost nova_compute[280168]: 2025-11-28 10:15:12.453 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"df", "format":"json"} v 0) Nov 28 05:15:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.32:0/1275385607' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch Nov 28 05:15:13 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) Nov 28 05:15:13 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1275385607' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch Nov 28 05:15:14 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s Nov 28 05:15:14 localhost nova_compute[280168]: 2025-11-28 10:15:14.750 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:16 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s Nov 28 05:15:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:17 localhost nova_compute[280168]: 2025-11-28 10:15:17.487 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:18 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v717: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s wr, 0 op/s Nov 28 05:15:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e4f2bd9a-6730-4442-8982-1535b4534b94", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 
28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e4f2bd9a-6730-4442-8982-1535b4534b94/.meta.tmp' Nov 28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e4f2bd9a-6730-4442-8982-1535b4534b94/.meta.tmp' to config b'/volumes/_nogroup/e4f2bd9a-6730-4442-8982-1535b4534b94/.meta' Nov 28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:18 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e4f2bd9a-6730-4442-8982-1535b4534b94", "format": "json"}]: dispatch Nov 28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:18 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:19 localhost nova_compute[280168]: 2025-11-28 10:15:19.796 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 
783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:15:19 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:15:19 localhost podman[325156]: 2025-11-28 10:15:19.996169435 +0000 UTC m=+0.087660782 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:15:20 localhost podman[325150]: 2025-11-28 10:15:19.96972866 +0000 UTC m=+0.069723039 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}) Nov 28 05:15:20 localhost podman[325149]: 2025-11-28 10:15:20.0291632 +0000 UTC m=+0.129391096 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller) Nov 28 05:15:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 230 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:15:20 localhost podman[325150]: 2025-11-28 10:15:20.049387694 +0000 UTC m=+0.149382063 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 
'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2) Nov 28 05:15:20 localhost podman[325148]: 2025-11-28 10:15:20.089794538 +0000 UTC m=+0.194244213 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': 
['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3) Nov 28 05:15:20 localhost podman[325149]: 2025-11-28 10:15:20.098003581 +0000 UTC m=+0.198231487 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true) Nov 28 05:15:20 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:15:20 localhost podman[325148]: 2025-11-28 10:15:20.150781206 +0000 UTC m=+0.255230901 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS) Nov 28 05:15:20 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:15:20 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:15:20 localhost podman[325156]: 2025-11-28 10:15:20.256337737 +0000 UTC m=+0.347829104 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:15:20 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:15:20 localhost systemd[1]: tmp-crun.giB8y1.mount: Deactivated successfully. 
Nov 28 05:15:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "98a016eb-e15e-4a92-95d1-45b6c6f58025", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/98a016eb-e15e-4a92-95d1-45b6c6f58025/.meta.tmp' Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/98a016eb-e15e-4a92-95d1-45b6c6f58025/.meta.tmp' to config b'/volumes/_nogroup/98a016eb-e15e-4a92-95d1-45b6c6f58025/.meta' Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:21 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "98a016eb-e15e-4a92-95d1-45b6c6f58025", "format": "json"}]: dispatch Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:21 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, 
sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s wr, 0 op/s Nov 28 05:15:22 localhost nova_compute[280168]: 2025-11-28 10:15:22.520 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:15:23 localhost podman[325229]: 2025-11-28 10:15:23.96725606 +0000 UTC m=+0.075909849 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck 
node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:15:23 localhost podman[325229]: 2025-11-28 10:15:23.982458729 +0000 UTC m=+0.091112568 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter) Nov 28 05:15:23 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:15:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v720: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s wr, 0 op/s Nov 28 05:15:24 localhost nova_compute[280168]: 2025-11-28 10:15:24.799 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "183f7a7a-5ae4-4e98-be38-4edc7d9e437a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/183f7a7a-5ae4-4e98-be38-4edc7d9e437a/.meta.tmp' Nov 28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/183f7a7a-5ae4-4e98-be38-4edc7d9e437a/.meta.tmp' to config b'/volumes/_nogroup/183f7a7a-5ae4-4e98-be38-4edc7d9e437a/.meta' Nov 28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:25 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "183f7a7a-5ae4-4e98-be38-4edc7d9e437a", "format": "json"}]: dispatch Nov 
28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:25 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:26 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 12 KiB/s wr, 0 op/s Nov 28 05:15:26 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:15:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:26 localhost podman[325252]: 2025-11-28 10:15:26.971906042 +0000 UTC m=+0.081768930 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2) Nov 28 05:15:26 localhost podman[325252]: 2025-11-28 10:15:26.984571931 +0000 UTC m=+0.094434819 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true) Nov 28 05:15:26 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:15:27 localhost nova_compute[280168]: 2025-11-28 10:15:27.522 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:27 localhost openstack_network_exporter[240973]: ERROR 10:15:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:15:27 localhost openstack_network_exporter[240973]: ERROR 10:15:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:15:27 localhost openstack_network_exporter[240973]: ERROR 10:15:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:15:27 localhost openstack_network_exporter[240973]: ERROR 10:15:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:15:27 localhost openstack_network_exporter[240973]: Nov 28 05:15:27 localhost openstack_network_exporter[240973]: ERROR 10:15:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:15:27 localhost 
openstack_network_exporter[240973]: Nov 28 05:15:28 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s wr, 2 op/s Nov 28 05:15:28 localhost podman[239012]: time="2025-11-28T10:15:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:15:28 localhost podman[239012]: @ - - [28/Nov/2025:10:15:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:15:28 localhost podman[239012]: @ - - [28/Nov/2025:10:15:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19260 "" "Go-http-client/1.1" Nov 28 05:15:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8b4e8834-f80e-4f71-9b6d-4708a36d1242", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8b4e8834-f80e-4f71-9b6d-4708a36d1242/.meta.tmp' Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8b4e8834-f80e-4f71-9b6d-4708a36d1242/.meta.tmp' to config b'/volumes/_nogroup/8b4e8834-f80e-4f71-9b6d-4708a36d1242/.meta' Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, 
namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:29 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8b4e8834-f80e-4f71-9b6d-4708a36d1242", "format": "json"}]: dispatch Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:29 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:29 localhost nova_compute[280168]: 2025-11-28 10:15:29.832 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:30 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s wr, 2 op/s Nov 28 05:15:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:32 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s wr, 3 op/s Nov 28 05:15:32 localhost nova_compute[280168]: 2025-11-28 10:15:32.547 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": 
"cephfs", "clone_name": "8b4e8834-f80e-4f71-9b6d-4708a36d1242", "format": "json"}]: dispatch Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:32 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:15:32.924+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8b4e8834-f80e-4f71-9b6d-4708a36d1242' of type subvolume Nov 28 05:15:32 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8b4e8834-f80e-4f71-9b6d-4708a36d1242' of type subvolume Nov 28 05:15:32 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8b4e8834-f80e-4f71-9b6d-4708a36d1242", "force": true, "format": "json"}]: dispatch Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8b4e8834-f80e-4f71-9b6d-4708a36d1242'' moved to trashcan Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:15:32 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:8b4e8834-f80e-4f71-9b6d-4708a36d1242, vol_name:cephfs) < "" Nov 28 05:15:34 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s wr, 2 op/s Nov 28 05:15:34 localhost nova_compute[280168]: 2025-11-28 10:15:34.857 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:15:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:15:36 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 231 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 61 KiB/s wr, 2 op/s Nov 28 05:15:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "183f7a7a-5ae4-4e98-be38-4edc7d9e437a", "format": "json"}]: dispatch Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:36 localhost 
ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:15:36.139+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '183f7a7a-5ae4-4e98-be38-4edc7d9e437a' of type subvolume Nov 28 05:15:36 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '183f7a7a-5ae4-4e98-be38-4edc7d9e437a' of type subvolume Nov 28 05:15:36 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "183f7a7a-5ae4-4e98-be38-4edc7d9e437a", "force": true, "format": "json"}]: dispatch Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/183f7a7a-5ae4-4e98-be38-4edc7d9e437a'' moved to trashcan Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:15:36 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:183f7a7a-5ae4-4e98-be38-4edc7d9e437a, vol_name:cephfs) < "" Nov 28 05:15:36 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:15:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:36 localhost podman[325288]: 2025-11-28 10:15:36.978363261 +0000 UTC m=+0.088207438 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, release=1755695350, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers) Nov 28 05:15:36 localhost podman[325288]: 2025-11-28 10:15:36.990097752 +0000 UTC m=+0.099941969 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git) Nov 28 05:15:37 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. Nov 28 05:15:37 localhost nova_compute[280168]: 2025-11-28 10:15:37.592 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:15:37 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:15:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:15:37 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:15:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:15:37 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev e93e93e0-f3e6-4b8e-b81d-53d2f3b70421 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:15:37 localhost ceph-mgr[286188]: [progress INFO root] complete: 
finished ev e93e93e0-f3e6-4b8e-b81d-53d2f3b70421 (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:15:37 localhost ceph-mgr[286188]: [progress INFO root] Completed event e93e93e0-f3e6-4b8e-b81d-53d2f3b70421 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:15:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:15:37 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:15:38 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 107 KiB/s wr, 4 op/s Nov 28 05:15:38 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:15:38 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:15:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "98a016eb-e15e-4a92-95d1-45b6c6f58025", "format": "json"}]: dispatch Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:39 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:15:39.341+0000 7fcc87448640 -1 mgr.server reply 
reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98a016eb-e15e-4a92-95d1-45b6c6f58025' of type subvolume Nov 28 05:15:39 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '98a016eb-e15e-4a92-95d1-45b6c6f58025' of type subvolume Nov 28 05:15:39 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "98a016eb-e15e-4a92-95d1-45b6c6f58025", "force": true, "format": "json"}]: dispatch Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/98a016eb-e15e-4a92-95d1-45b6c6f58025'' moved to trashcan Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:15:39 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:98a016eb-e15e-4a92-95d1-45b6c6f58025, vol_name:cephfs) < "" Nov 28 05:15:39 localhost nova_compute[280168]: 2025-11-28 10:15:39.881 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:40 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 66 KiB/s wr, 2 op/s Nov 28 05:15:41 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:15:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command 
mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:15:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:42 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 90 KiB/s wr, 4 op/s Nov 28 05:15:42 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:15:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e4f2bd9a-6730-4442-8982-1535b4534b94", "format": "json"}]: dispatch Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e4f2bd9a-6730-4442-8982-1535b4534b94, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e4f2bd9a-6730-4442-8982-1535b4534b94, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:15:42 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:15:42.533+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e4f2bd9a-6730-4442-8982-1535b4534b94' of type subvolume Nov 28 05:15:42 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e4f2bd9a-6730-4442-8982-1535b4534b94' of type subvolume Nov 28 05:15:42 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e4f2bd9a-6730-4442-8982-1535b4534b94", "force": true, 
"format": "json"}]: dispatch Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e4f2bd9a-6730-4442-8982-1535b4534b94'' moved to trashcan Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:15:42 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e4f2bd9a-6730-4442-8982-1535b4534b94, vol_name:cephfs) < "" Nov 28 05:15:42 localhost nova_compute[280168]: 2025-11-28 10:15:42.638 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:44 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 70 KiB/s wr, 3 op/s Nov 28 05:15:44 localhost nova_compute[280168]: 2025-11-28 10:15:44.912 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:46 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 70 KiB/s wr, 3 op/s Nov 28 05:15:46 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:47 localhost nova_compute[280168]: 2025-11-28 10:15:47.671 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m 
Nov 28 05:15:48 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 95 KiB/s wr, 4 op/s Nov 28 05:15:49 localhost nova_compute[280168]: 2025-11-28 10:15:49.956 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:50 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 49 KiB/s wr, 3 op/s Nov 28 05:15:50 localhost nova_compute[280168]: 2025-11-28 10:15:50.489 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:50.859 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:15:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:50.859 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:15:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:15:50.859 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:15:50 localhost systemd[1]: Started 
/usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:15:50 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. Nov 28 05:15:51 localhost podman[325379]: 2025-11-28 10:15:51.007018372 +0000 UTC m=+0.100496637 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:15:51 localhost podman[325379]: 2025-11-28 10:15:51.039472472 +0000 UTC m=+0.132950747 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2) Nov 28 05:15:51 localhost systemd[1]: tmp-crun.sgH9DI.mount: Deactivated successfully. Nov 28 05:15:51 localhost podman[325378]: 2025-11-28 10:15:51.054511874 +0000 UTC m=+0.151326641 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0) Nov 28 05:15:51 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:15:51 localhost podman[325380]: 2025-11-28 10:15:51.146281641 +0000 UTC m=+0.233813252 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}) Nov 28 05:15:51 localhost podman[325380]: 2025-11-28 10:15:51.152449561 +0000 UTC m=+0.239981182 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 
'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible) Nov 28 05:15:51 localhost podman[325378]: 2025-11-28 10:15:51.1615082 +0000 UTC m=+0.258323007 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true) Nov 28 05:15:51 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. 
Nov 28 05:15:51 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:15:51 localhost podman[325377]: 2025-11-28 10:15:51.201354648 +0000 UTC m=+0.299098173 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) 
Nov 28 05:15:51 localhost podman[325377]: 2025-11-28 10:15:51.219536997 +0000 UTC m=+0.317280492 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image) Nov 28 05:15:51 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:15:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:52 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 56 KiB/s wr, 3 op/s Nov 28 05:15:52 localhost nova_compute[280168]: 2025-11-28 10:15:52.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:52 localhost nova_compute[280168]: 2025-11-28 10:15:52.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:52 localhost nova_compute[280168]: 2025-11-28 10:15:52.700 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' to config b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta' Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:53 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "format": "json"}]: dispatch Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:53 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:54 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 32 KiB/s wr, 1 op/s Nov 28 05:15:54 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:15:54 localhost nova_compute[280168]: 2025-11-28 10:15:54.998 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:55 localhost podman[325461]: 2025-11-28 10:15:55.006794141 +0000 UTC m=+0.112674021 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:15:55 localhost podman[325461]: 2025-11-28 10:15:55.044399779 +0000 UTC m=+0.150279639 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:15:55 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. Nov 28 05:15:55 localhost nova_compute[280168]: 2025-11-28 10:15:55.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:55 localhost nova_compute[280168]: 2025-11-28 10:15:55.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... 
_reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:15:55 localhost nova_compute[280168]: 2025-11-28 10:15:55.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:56 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 232 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 32 KiB/s wr, 1 op/s Nov 28 05:15:56 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "snap_name": "489a50ca-e11a-4c64-8d66-724b9734ba9f", "format": "json"}]: dispatch Nov 28 05:15:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:56 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.249 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.249 280172 DEBUG oslo_service.periodic_task [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.250 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.250 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.266 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:15:56 localhost nova_compute[280168]: 2025-11-28 10:15:56.266 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:15:57 localhost nova_compute[280168]: 2025-11-28 10:15:57.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:57 localhost openstack_network_exporter[240973]: ERROR 10:15:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:15:57 localhost openstack_network_exporter[240973]: ERROR 10:15:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:15:57 localhost openstack_network_exporter[240973]: ERROR 10:15:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:15:57 localhost openstack_network_exporter[240973]: ERROR 10:15:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:15:57 localhost openstack_network_exporter[240973]: Nov 28 05:15:57 localhost openstack_network_exporter[240973]: ERROR 10:15:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:15:57 localhost openstack_network_exporter[240973]: Nov 28 05:15:57 localhost nova_compute[280168]: 2025-11-28 10:15:57.702 280172 DEBUG 
ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:15:57 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. Nov 28 05:15:57 localhost podman[325485]: 2025-11-28 10:15:57.966341873 +0000 UTC m=+0.074845167 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', 
'/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3) Nov 28 05:15:57 localhost podman[325485]: 2025-11-28 10:15:57.977199956 +0000 UTC m=+0.085703240 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS) Nov 28 05:15:57 localhost systemd[1]: 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:15:58 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 50 KiB/s wr, 2 op/s Nov 28 05:15:58 localhost podman[239012]: time="2025-11-28T10:15:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:15:58 localhost podman[239012]: @ - - [28/Nov/2025:10:15:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:15:58 localhost podman[239012]: @ - - [28/Nov/2025:10:15:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19253 "" "Go-http-client/1.1" Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.267 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.267 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.268 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.268 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.269 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:15:59 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:15:59 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/1387439949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.728 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.931 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.933 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11440MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": 
"1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.933 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.934 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.994 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - 
-] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:15:59 localhost nova_compute[280168]: 2025-11-28 10:15:59.995 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.040 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:00 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s wr, 1 op/s Nov 28 05:16:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "snap_name": "489a50ca-e11a-4c64-8d66-724b9734ba9f_7b47d2ce-aead-4b15-bdda-8d040cf088ac", "force": true, "format": "json"}]: dispatch Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f_7b47d2ce-aead-4b15-bdda-8d040cf088ac, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] 
Renamed b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' to config b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta' Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f_7b47d2ce-aead-4b15-bdda-8d040cf088ac, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:00 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "snap_name": "489a50ca-e11a-4c64-8d66-724b9734ba9f", "force": true, "format": "json"}]: dispatch Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta.tmp' to config b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb/.meta' Nov 28 05:16:00 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:489a50ca-e11a-4c64-8d66-724b9734ba9f, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.177 280172 DEBUG oslo_concurrency.processutils [None 
req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.629 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.630 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.631 12 DEBUG ceilometer.polling.manager [-] Skip 
pollster network.incoming.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.631 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.632 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle 
poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.633 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.634 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.634 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.634 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle poll_and_notify 
/usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.634 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.635 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.635 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceilometer_agent_compute[236400]: 2025-11-28 10:16:00.635 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193 Nov 28 05:16:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:16:00 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 
172.18.0.108:0/3856985557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.658 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.665 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.686 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.689 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.689 280172 DEBUG 
oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.690 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.690 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m Nov 28 05:16:00 localhost nova_compute[280168]: 2025-11-28 10:16:00.704 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m Nov 28 05:16:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:02 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 51 KiB/s wr, 2 op/s Nov 28 05:16:02 localhost nova_compute[280168]: 2025-11-28 10:16:02.731 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": 
"4650ef04-7360-41ee-b6b9-a66770a7edbb", "format": "json"}]: dispatch Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:16:03 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:16:03.267+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4650ef04-7360-41ee-b6b9-a66770a7edbb' of type subvolume Nov 28 05:16:03 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4650ef04-7360-41ee-b6b9-a66770a7edbb' of type subvolume Nov 28 05:16:03 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4650ef04-7360-41ee-b6b9-a66770a7edbb", "force": true, "format": "json"}]: dispatch Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4650ef04-7360-41ee-b6b9-a66770a7edbb'' moved to trashcan Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs' Nov 28 05:16:03 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, 
sub_name:4650ef04-7360-41ee-b6b9-a66770a7edbb, vol_name:cephfs) < "" Nov 28 05:16:04 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 45 KiB/s wr, 2 op/s Nov 28 05:16:05 localhost nova_compute[280168]: 2025-11-28 10:16:05.092 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:16:05 Nov 28 05:16:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000 Nov 28 05:16:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap Nov 28 05:16:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', '.mgr', 'images', 'vms', 'manila_metadata'] Nov 28 05:16:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. 
Nov 28 05:16:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 45 KiB/s wr, 2 op/s Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1) Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32) Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32) Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32) Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32) Nov 28 05:16:06 localhost ceph-mgr[286188]: 
[pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32) Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784 Nov 28 05:16:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002881977160106086 of space, bias 4.0, pg target 2.294053819444444 quantized to 16 (current 16) Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules Nov 28 05:16:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e291 e291: 6 total, 6 up, 6 in Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after= Nov 28 05:16:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after= Nov 28 05:16:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:07 localhost 
ovn_metadata_agent[158525]: 2025-11-28 10:16:07.451 158530 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '92:49:97', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'ca:ab:0a:de:51:20'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m Nov 28 05:16:07 localhost nova_compute[280168]: 2025-11-28 10:16:07.451 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:07 localhost ovn_metadata_agent[158525]: 2025-11-28 10:16:07.452 158530 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m Nov 28 05:16:07 localhost nova_compute[280168]: 2025-11-28 10:16:07.776 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:07 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. 
Nov 28 05:16:07 localhost podman[325548]: 2025-11-28 10:16:07.978524539 +0000 UTC m=+0.081988597 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=) Nov 28 05:16:07 localhost podman[325548]: 2025-11-28 10:16:07.99544189 +0000 UTC m=+0.098905968 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.) Nov 28 05:16:08 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:16:08 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 63 KiB/s wr, 3 op/s Nov 28 05:16:10 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 63 KiB/s wr, 3 op/s Nov 28 05:16:10 localhost nova_compute[280168]: 2025-11-28 10:16:10.142 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:12 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 2 op/s Nov 28 05:16:12 localhost nova_compute[280168]: 2025-11-28 10:16:12.809 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' to config b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta' Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:13 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "format": "json"}]: dispatch Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:13 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:14 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 2 op/s Nov 28 05:16:15 localhost nova_compute[280168]: 2025-11-28 10:16:15.177 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:15 localhost nova_compute[280168]: 2025-11-28 10:16:15.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks 
/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:15 localhost nova_compute[280168]: 2025-11-28 10:16:15.238 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m Nov 28 05:16:15 localhost ovn_metadata_agent[158525]: 2025-11-28 10:16:15.455 158530 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=62c03cad-89c1-4fd7-973b-8f2a608c71f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m Nov 28 05:16:16 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 233 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 49 KiB/s wr, 2 op/s Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0. 
Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.165795) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976165872, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2496, "num_deletes": 251, "total_data_size": 3083912, "memory_usage": 3136896, "flush_reason": "Manual Compaction"} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976178596, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 1989878, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33070, "largest_seqno": 35561, "table_properties": {"data_size": 1980900, "index_size": 5423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 21281, "raw_average_key_size": 21, "raw_value_size": 1961837, "raw_average_value_size": 1971, "num_data_blocks": 233, "num_entries": 995, "num_filter_entries": 995, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; 
max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764324812, "oldest_key_time": 1764324812, "file_creation_time": 1764324976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 12860 microseconds, and 5386 cpu microseconds. Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.178651) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 1989878 bytes OK Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.178678) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.183087) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.183119) EVENT_LOG_v1 {"time_micros": 1764324976183102, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.183137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3072621, prev total WAL file 
size 3072621, number of live WAL files 2. Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.183999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end) Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(1943KB)], [51(18MB)] Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976184112, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 21596874, "oldest_snapshot_seqno": -1} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14829 keys, 20052149 bytes, temperature: kUnknown Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976315648, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 20052149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19964381, "index_size": 49587, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 37125, "raw_key_size": 394594, "raw_average_key_size": 26, "raw_value_size": 
19710102, "raw_average_value_size": 1329, "num_data_blocks": 1867, "num_entries": 14829, "num_filter_entries": 14829, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764323786, "oldest_key_time": 0, "file_creation_time": 1764324976, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "75e61b0e-4f73-4b03-b096-8587ecbe7a9f", "db_session_id": "7KM5GJAJPD54H6HSLJHG", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed. 
Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.315956) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 20052149 bytes Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.317755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.1 rd, 152.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 18.7 +0.0 blob) out(19.1 +0.0 blob), read-write-amplify(20.9) write-amplify(10.1) OK, records in: 15362, records dropped: 533 output_compression: NoCompression Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.317784) EVENT_LOG_v1 {"time_micros": 1764324976317772, "job": 30, "event": "compaction_finished", "compaction_time_micros": 131624, "compaction_time_cpu_micros": 52825, "output_level": 6, "num_output_files": 1, "total_output_size": 20052149, "num_input_records": 15362, "num_output_records": 14829, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976318250, "job": 30, "event": "table_file_deletion", "file_number": 53} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005538515/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764324976321200, 
"job": 30, "event": "table_file_deletion", "file_number": 51} Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.183921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.321274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.321280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.321284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.321287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mon[301134]: rocksdb: (Original Log Time 2025/11/28-10:16:16.321291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting Nov 28 05:16:16 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "snap_name": "f728da4c-905d-477e-bad7-75aaf837cd76", "format": "json"}]: dispatch Nov 28 05:16:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:16 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76, 
sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:17 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e292 e292: 6 total, 6 up, 6 in Nov 28 05:16:17 localhost nova_compute[280168]: 2025-11-28 10:16:17.844 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:18 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s wr, 1 op/s Nov 28 05:16:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s wr, 1 op/s Nov 28 05:16:20 localhost nova_compute[280168]: 2025-11-28 10:16:20.213 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "snap_name": "f728da4c-905d-477e-bad7-75aaf837cd76_f56b26cb-b1ec-4894-975d-4a0ccc289bb1", "force": true, "format": "json"}]: dispatch Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76_f56b26cb-b1ec-4894-975d-4a0ccc289bb1, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config 
b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' to config b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta' Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76_f56b26cb-b1ec-4894-975d-4a0ccc289bb1, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:20 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "snap_name": "f728da4c-905d-477e-bad7-75aaf837cd76", "force": true, "format": "json"}]: dispatch Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta.tmp' to config b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95/.meta' Nov 28 05:16:20 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f728da4c-905d-477e-bad7-75aaf837cd76, 
sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:20 localhost nova_compute[280168]: 2025-11-28 10:16:20.801 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:16:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:16:21 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:16:22 localhost podman[325575]: 2025-11-28 10:16:21.962316727 +0000 UTC m=+0.055734788 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125) Nov 28 05:16:22 localhost podman[325568]: 2025-11-28 10:16:22.043988812 +0000 UTC 
m=+0.147645858 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true) Nov 28 05:16:22 localhost podman[325569]: 2025-11-28 10:16:21.997276284 +0000 UTC m=+0.091351085 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 
(image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:16:22 localhost podman[325576]: 2025-11-28 10:16:22.054668361 +0000 UTC m=+0.140627152 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 
'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi ) Nov 28 05:16:22 localhost podman[325568]: 2025-11-28 10:16:22.055390884 +0000 UTC m=+0.159047940 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute) Nov 28 05:16:22 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. Nov 28 05:16:22 localhost podman[325569]: 2025-11-28 10:16:22.079540127 +0000 UTC m=+0.173614928 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0) Nov 28 05:16:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s 
wr, 2 op/s Nov 28 05:16:22 localhost podman[325575]: 2025-11-28 10:16:22.09521784 +0000 UTC m=+0.188635981 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:16:22 localhost systemd[1]: 
98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. Nov 28 05:16:22 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:16:22 localhost podman[325576]: 2025-11-28 10:16:22.136598485 +0000 UTC m=+0.222557246 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:16:22 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:16:22 localhost nova_compute[280168]: 2025-11-28 10:16:22.880 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:22 localhost systemd[1]: tmp-crun.xqbxYO.mount: Deactivated successfully. 
Nov 28 05:16:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "format": "json"}]: dispatch Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, format:json, prefix:fs clone status, vol_name:cephfs) < "" Nov 28 05:16:23 localhost ceph-2c5417c9-00eb-57d5-a565-ddecbc7995c1-mgr-np0005538515-yfkzhl[286184]: 2025-11-28T10:16:23.794+0000 7fcc87448640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2507d4ac-39eb-44f4-bc88-d8388bf61f95' of type subvolume Nov 28 05:16:23 localhost ceph-mgr[286188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2507d4ac-39eb-44f4-bc88-d8388bf61f95' of type subvolume Nov 28 05:16:23 localhost ceph-mgr[286188]: log_channel(audit) log [DBG] : from='client.15651 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2507d4ac-39eb-44f4-bc88-d8388bf61f95", "force": true, "format": "json"}]: dispatch Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2507d4ac-39eb-44f4-bc88-d8388bf61f95'' moved to trashcan Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.fs.async_job] queuing job for 
volume 'cephfs' Nov 28 05:16:23 localhost ceph-mgr[286188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2507d4ac-39eb-44f4-bc88-d8388bf61f95, vol_name:cephfs) < "" Nov 28 05:16:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v752: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s wr, 2 op/s Nov 28 05:16:25 localhost nova_compute[280168]: 2025-11-28 10:16:25.250 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:25 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. Nov 28 05:16:25 localhost podman[325650]: 2025-11-28 10:16:25.974388215 +0000 UTC m=+0.080340365 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': 
{'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm) Nov 28 05:16:25 localhost podman[325650]: 2025-11-28 10:16:25.987420867 +0000 UTC m=+0.093373067 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible) Nov 28 05:16:26 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:16:26 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s wr, 2 op/s Nov 28 05:16:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e293 e293: 6 total, 6 up, 6 in Nov 28 05:16:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- Nov 28 05:16:26 localhost ceph-mon[301134]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 4735 writes, 35K keys, 4735 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 4735 writes, 4735 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2458 writes, 13K keys, 2458 commit groups, 1.0 writes per commit group, ingest: 17.89 MB, 0.03 MB/s#012Interval WAL: 2458 writes, 2458 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 L0 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 117.0 0.32 0.10 15 0.022 0 0 0.0 0.0#012 L6 1/0 19.12 MB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 6.5 149.5 138.8 1.77 0.66 14 0.127 188K 7204 0.0 0.0#012 Sum 1/0 19.12 MB 0.0 0.3 0.0 0.2 0.3 0.1 0.0 7.5 126.3 135.4 2.10 0.76 29 0.072 188K 7204 0.0 0.0#012 Int 0/0 0.00 KB 0.0 0.2 0.0 0.1 0.2 0.0 0.0 12.1 134.8 137.7 1.14 0.42 16 0.071 113K 4315 0.0 0.0#012#012** Compaction Stats [default] **#012Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low 0/0 0.00 KB 0.0 0.3 0.0 0.2 0.2 0.0 0.0 0.0 149.5 138.8 1.77 0.66 14 0.127 188K 7204 0.0 0.0#012High 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 117.8 0.32 0.10 14 0.023 0 0 0.0 0.0#012User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.8 0.00 0.00 1 0.002 0 0 0.0 0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.037, interval 0.013#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.28 GB write, 0.24 MB/s write, 0.26 GB read, 0.22 MB/s read, 2.1 seconds#012Interval compaction: 0.15 GB write, 0.26 MB/s write, 0.15 GB read, 0.26 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x561ae0707350#2 capacity: 304.00 MB usage: 21.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000251 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1297,20.07 MB,6.6025%) FilterBlock(29,556.11 KB,0.178643%) IndexBlock(29,726.11 KB,0.233254%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] ** Nov 28 05:16:26 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e293 _set_new_cache_sizes 
cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:27 localhost openstack_network_exporter[240973]: ERROR 10:16:27 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:16:27 localhost openstack_network_exporter[240973]: ERROR 10:16:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:16:27 localhost openstack_network_exporter[240973]: ERROR 10:16:27 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:16:27 localhost openstack_network_exporter[240973]: ERROR 10:16:27 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:16:27 localhost openstack_network_exporter[240973]: Nov 28 05:16:27 localhost openstack_network_exporter[240973]: ERROR 10:16:27 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:16:27 localhost openstack_network_exporter[240973]: Nov 28 05:16:27 localhost nova_compute[280168]: 2025-11-28 10:16:27.882 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:28 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 3 op/s Nov 28 05:16:28 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:16:28 localhost podman[239012]: time="2025-11-28T10:16:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:16:28 localhost podman[239012]: @ - - [28/Nov/2025:10:16:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:16:28 localhost podman[239012]: @ - - [28/Nov/2025:10:16:28 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19252 "" "Go-http-client/1.1" Nov 28 05:16:29 localhost systemd[1]: tmp-crun.7dJ4bM.mount: Deactivated successfully. Nov 28 05:16:29 localhost podman[325673]: 2025-11-28 10:16:29.047305058 +0000 UTC m=+0.151133376 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd) Nov 28 05:16:29 localhost podman[325673]: 2025-11-28 10:16:29.060513765 +0000 UTC m=+0.164342063 container exec_died cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', 
'/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:16:29 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. Nov 28 05:16:30 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 61 KiB/s wr, 3 op/s Nov 28 05:16:30 localhost nova_compute[280168]: 2025-11-28 10:16:30.255 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:31 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:32 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v757: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 57 KiB/s wr, 3 op/s Nov 28 05:16:32 localhost nova_compute[280168]: 2025-11-28 10:16:32.914 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:34 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 57 KiB/s wr, 3 op/s Nov 28 05:16:35 localhost nova_compute[280168]: 2025-11-28 10:16:35.277 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle 
connections.. Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections.. Nov 28 05:16:35 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: [] Nov 28 05:16:36 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 57 KiB/s wr, 3 op/s Nov 28 05:16:36 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:37 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 e294: 6 total, 6 up, 6 in Nov 28 05:16:37 localhost nova_compute[280168]: 2025-11-28 10:16:37.950 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:38 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s wr, 1 op/s Nov 28 05:16:38 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf. Nov 28 05:16:38 localhost systemd[1]: tmp-crun.Flyulf.mount: Deactivated successfully. 
Nov 28 05:16:38 localhost podman[325710]: 2025-11-28 10:16:38.209931729 +0000 UTC m=+0.083577315 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter) Nov 28 05:16:38 localhost podman[325710]: 2025-11-28 10:16:38.225764827 +0000 UTC m=+0.099410433 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7) Nov 28 05:16:38 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully. 
Nov 28 05:16:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) Nov 28 05:16:38 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "config generate-minimal-conf"} : dispatch Nov 28 05:16:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) Nov 28 05:16:38 localhost ceph-mon[301134]: log_channel(audit) log [INF] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:16:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) Nov 28 05:16:38 localhost ceph-mgr[286188]: [progress INFO root] update: starting ev 263baec8-83d9-46b6-bc6c-1803c8c8b24e (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:16:38 localhost ceph-mgr[286188]: [progress INFO root] complete: finished ev 263baec8-83d9-46b6-bc6c-1803c8c8b24e (Updating node-proxy deployment (+3 -> 3)) Nov 28 05:16:38 localhost ceph-mgr[286188]: [progress INFO root] Completed event 263baec8-83d9-46b6-bc6c-1803c8c8b24e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds Nov 28 05:16:38 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) Nov 28 05:16:38 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch Nov 28 05:16:39 localhost ceph-mon[301134]: from='mgr.34481 172.18.0.108:0/3924249495' entity='mgr.np0005538515.yfkzhl' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch Nov 28 05:16:39 
localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:16:40 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s wr, 1 op/s Nov 28 05:16:40 localhost nova_compute[280168]: 2025-11-28 10:16:40.321 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:41 localhost ceph-mgr[286188]: [progress INFO root] Writing back 50 completed events Nov 28 05:16:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) Nov 28 05:16:41 localhost ceph-mon[301134]: from='mgr.34481 ' entity='mgr.np0005538515.yfkzhl' Nov 28 05:16:41 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:42 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:42 localhost nova_compute[280168]: 2025-11-28 10:16:42.994 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:44 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:45 localhost nova_compute[280168]: 2025-11-28 10:16:45.357 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:46 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:46 localhost ceph-mon[301134]: 
mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:48 localhost nova_compute[280168]: 2025-11-28 10:16:48.029 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:48 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:50 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:50 localhost nova_compute[280168]: 2025-11-28 10:16:50.399 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:16:50.860 158530 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:16:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:16:50.861 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:16:50 localhost ovn_metadata_agent[158525]: 2025-11-28 10:16:50.861 158530 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:16:51 localhost nova_compute[280168]: 2025-11-28 10:16:51.771 280172 DEBUG 
oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:51 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:52 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3. Nov 28 05:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9. Nov 28 05:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c. Nov 28 05:16:52 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7. 
Nov 28 05:16:53 localhost nova_compute[280168]: 2025-11-28 10:16:53.054 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:53 localhost podman[325796]: 2025-11-28 10:16:53.059255828 +0000 UTC m=+0.161272009 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0) Nov 28 05:16:53 localhost podman[325798]: 2025-11-28 10:16:53.064440568 +0000 UTC m=+0.154759519 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true) Nov 28 05:16:53 localhost podman[325797]: 2025-11-28 10:16:52.97296731 +0000 UTC m=+0.075302861 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller) Nov 28 05:16:53 localhost podman[325796]: 2025-11-28 10:16:53.075359054 +0000 UTC m=+0.177375165 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629) Nov 28 05:16:53 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully. 
Nov 28 05:16:53 localhost podman[325798]: 2025-11-28 10:16:53.100462007 +0000 UTC m=+0.190780958 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team) Nov 28 05:16:53 localhost systemd[1]: 
b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully. Nov 28 05:16:53 localhost podman[325797]: 2025-11-28 10:16:53.156593556 +0000 UTC m=+0.258929177 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}) Nov 28 05:16:53 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully. 
Nov 28 05:16:53 localhost podman[325809]: 2025-11-28 10:16:53.220190644 +0000 UTC m=+0.307205952 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:16:53 localhost podman[325809]: 2025-11-28 10:16:53.231685738 +0000 UTC m=+0.318701066 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': 
['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter) Nov 28 05:16:53 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully. Nov 28 05:16:53 localhost nova_compute[280168]: 2025-11-28 10:16:53.263 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:54 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:54 localhost nova_compute[280168]: 2025-11-28 10:16:54.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:55 localhost nova_compute[280168]: 2025-11-28 10:16:55.441 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:56 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:56 localhost nova_compute[280168]: 2025-11-28 10:16:56.234 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:56 localhost systemd[1]: Started /usr/bin/podman healthcheck run 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686. 
Nov 28 05:16:56 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104 Nov 28 05:16:56 localhost podman[325879]: 2025-11-28 10:16:56.981625853 +0000 UTC m=+0.084479793 container health_status 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:16:56 localhost podman[325879]: 2025-11-28 10:16:56.995556672 +0000 UTC m=+0.098410622 container exec_died 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686 
(image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors , managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}) Nov 28 05:16:57 localhost systemd[1]: 56bbd0ca835b5b4ab7ba25672a11a9498e53aecd6d4bd842394893fa3da92686.service: Deactivated successfully. 
Nov 28 05:16:57 localhost nova_compute[280168]: 2025-11-28 10:16:57.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:57 localhost nova_compute[280168]: 2025-11-28 10:16:57.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:57 localhost nova_compute[280168]: 2025-11-28 10:16:57.239 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m Nov 28 05:16:57 localhost openstack_network_exporter[240973]: ERROR 10:16:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:16:57 localhost openstack_network_exporter[240973]: ERROR 10:16:57 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server Nov 28 05:16:57 localhost openstack_network_exporter[240973]: ERROR 10:16:57 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd Nov 28 05:16:57 localhost openstack_network_exporter[240973]: ERROR 10:16:57 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath Nov 28 05:16:57 localhost openstack_network_exporter[240973]: Nov 28 05:16:57 localhost openstack_network_exporter[240973]: ERROR 10:16:57 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath Nov 28 05:16:57 localhost openstack_network_exporter[240973]: Nov 28 05:16:58 localhost nova_compute[280168]: 2025-11-28 
10:16:58.064 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:16:58 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:16:58 localhost nova_compute[280168]: 2025-11-28 10:16:58.239 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:58 localhost nova_compute[280168]: 2025-11-28 10:16:58.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m Nov 28 05:16:58 localhost nova_compute[280168]: 2025-11-28 10:16:58.240 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m Nov 28 05:16:58 localhost nova_compute[280168]: 2025-11-28 10:16:58.258 280172 DEBUG nova.compute.manager [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Didn't find any instances for network info cache update. 
_heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m Nov 28 05:16:58 localhost podman[239012]: time="2025-11-28T10:16:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`" Nov 28 05:16:58 localhost podman[239012]: @ - - [28/Nov/2025:10:16:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156330 "" "Go-http-client/1.1" Nov 28 05:16:58 localhost podman[239012]: @ - - [28/Nov/2025:10:16:58 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19254 "" "Go-http-client/1.1" Nov 28 05:16:59 localhost nova_compute[280168]: 2025-11-28 10:16:59.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:16:59 localhost systemd[1]: Started /usr/bin/podman healthcheck run cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f. 
Nov 28 05:16:59 localhost podman[325903]: 2025-11-28 10:16:59.97221492 +0000 UTC m=+0.072568445 container health_status cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.license=GPLv2) Nov 28 05:16:59 localhost podman[325903]: 2025-11-28 10:16:59.984497869 +0000 UTC m=+0.084851444 container exec_died 
cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd) Nov 28 05:16:59 localhost systemd[1]: cdf4e3e42cf3903eaffd27921fa3139462d8f511d1951a0793f21197fa17820f.service: Deactivated successfully. 
Nov 28 05:17:00 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail Nov 28 05:17:00 localhost sshd[325922]: main: sshd: ssh-rsa algorithm is disabled Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.238 280172 DEBUG oslo_service.periodic_task [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m Nov 28 05:17:00 localhost systemd-logind[763]: New session 75 of user zuul. Nov 28 05:17:00 localhost systemd[1]: Started Session 75 of User zuul. 
Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.276 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.276 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.276 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.277 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Auditing locally available compute resources for np0005538515.localdomain (node: np0005538515.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.277 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.483 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 
__log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m Nov 28 05:17:00 localhost python3[325946]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-49a1-b30e-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None Nov 28 05:17:00 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0) Nov 28 05:17:00 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/847218026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.705 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.910 280172 WARNING nova.virt.libvirt.driver [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] This host appears to have multiple sockets per NUMA node. 
The `socket` PCI NUMA affinity will not be supported.#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.911 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Hypervisor/Node resource view: name=np0005538515.localdomain free_ram=11392MB free_disk=41.83686447143555GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": 
"1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.912 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.912 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.991 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m Nov 28 05:17:00 localhost nova_compute[280168]: 2025-11-28 10:17:00.992 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Final resource view: name=np0005538515.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.147 280172 DEBUG 
nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing inventories for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.219 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating ProviderTree inventory for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.219 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Updating inventory in ProviderTree for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.235 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing aggregate associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.259 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Refreshing trait associations for resource provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0, traits: COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_AVX,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AESNI,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AMD_SVM,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_F16C,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSSE3,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_FMA3,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NODE,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.291 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 28 05:17:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Nov 28 05:17:01 localhost ceph-mon[301134]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3631755571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.776 280172 DEBUG oslo_concurrency.processutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.783 280172 DEBUG nova.compute.provider_tree [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed in ProviderTree for provider: 72fba1ca-0d86-48af-8a3d-510284dfd0e0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.799 280172 DEBUG nova.scheduler.client.report [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Inventory has not changed for provider 72fba1ca-0d86-48af-8a3d-510284dfd0e0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.802 280172 DEBUG nova.compute.resource_tracker [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Compute_service record updated for np0005538515.localdomain:np0005538515.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 28 05:17:01 localhost nova_compute[280168]: 2025-11-28 10:17:01.802 280172 DEBUG oslo_concurrency.lockutils [None req-def660ec-de0f-46dd-a9a8-d8f091e83f66 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 28 05:17:01 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:17:02 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:03 localhost nova_compute[280168]: 2025-11-28 10:17:03.109 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:04 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:05 localhost systemd[1]: session-75.scope: Deactivated successfully.
Nov 28 05:17:05 localhost systemd-logind[763]: Session 75 logged out. Waiting for processes to exit.
Nov 28 05:17:05 localhost systemd-logind[763]: Removed session 75.
Nov 28 05:17:05 localhost nova_compute[280168]: 2025-11-28 10:17:05.535 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:05 localhost ceph-mgr[286188]: [balancer INFO root] Optimize plan auto_2025-11-28_10:17:05
Nov 28 05:17:05 localhost ceph-mgr[286188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 28 05:17:05 localhost ceph-mgr[286188]: [balancer INFO root] do_upmap
Nov 28 05:17:05 localhost ceph-mgr[286188]: [balancer INFO root] pools ['manila_data', 'backups', 'volumes', 'vms', '.mgr', 'images', 'manila_metadata']
Nov 28 05:17:05 localhost ceph-mgr[286188]: [balancer INFO root] prepared 0/10 changes
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] scanning for idle connections..
Nov 28 05:17:05 localhost ceph-mgr[286188]: [volumes INFO mgr_util] cleaning up connections: []
Nov 28 05:17:06 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] _maybe_adjust
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003328000680485762 of space, bias 1.0, pg target 0.6656001360971524 quantized to 32 (current 32)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Nov 28 05:17:06 localhost ceph-mgr[286188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002986939907872697 of space, bias 4.0, pg target 2.3776041666666665 quantized to 16 (current 16)
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 28 05:17:06 localhost ceph-mgr[286188]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 28 05:17:06 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:17:08 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:08 localhost nova_compute[280168]: 2025-11-28 10:17:08.147 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:08 localhost systemd[1]: Started /usr/bin/podman healthcheck run 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.
Nov 28 05:17:08 localhost podman[325991]: 2025-11-28 10:17:08.977856306 +0000 UTC m=+0.084082541 container health_status 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Nov 28 05:17:08 localhost podman[325991]: 2025-11-28 10:17:08.994502749 +0000 UTC m=+0.100728994 container exec_died 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 28 05:17:09 localhost systemd[1]: 6c250c88fddb84b6d409058ed0d298c2f5484527eb3f5d819767f1d36839a4bf.service: Deactivated successfully.
Nov 28 05:17:10 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:10 localhost nova_compute[280168]: 2025-11-28 10:17:10.569 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:11 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:17:12 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:13 localhost nova_compute[280168]: 2025-11-28 10:17:13.188 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:14 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:15 localhost nova_compute[280168]: 2025-11-28 10:17:15.611 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:16 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:16 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:17:18 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:18 localhost nova_compute[280168]: 2025-11-28 10:17:18.224 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:20 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:20 localhost nova_compute[280168]: 2025-11-28 10:17:20.656 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:21 localhost ceph-mon[301134]: mon.np0005538515@2(peon).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 28 05:17:22 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:23 localhost nova_compute[280168]: 2025-11-28 10:17:23.269 280172 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 28 05:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.
Nov 28 05:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.
Nov 28 05:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.
Nov 28 05:17:23 localhost systemd[1]: Started /usr/bin/podman healthcheck run d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.
Nov 28 05:17:24 localhost systemd[1]: tmp-crun.FeDlrK.mount: Deactivated successfully.
Nov 28 05:17:24 localhost podman[326013]: 2025-11-28 10:17:24.003348968 +0000 UTC m=+0.093440260 container health_status d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 28 05:17:24 localhost podman[326012]: 2025-11-28 10:17:23.971017271 +0000 UTC m=+0.067460778 container health_status b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 28 05:17:24 localhost podman[326013]: 2025-11-28 10:17:24.03885624 +0000 UTC m=+0.128947572 container exec_died d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi , managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 28 05:17:24 localhost systemd[1]: d28c0e8fd438b00dd8ab36649f1b3f02744bd33191d227364de3217fdcd8c3d7.service: Deactivated successfully.
Nov 28 05:17:24 localhost podman[326012]: 2025-11-28 10:17:24.053557094 +0000 UTC m=+0.150000591 container exec_died b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 28 05:17:24 localhost systemd[1]: b95b8b369ca0d04466faa10822d7769e9b6e55c24a5c9f479419e65851e1616c.service: Deactivated successfully.
Nov 28 05:17:24 localhost podman[326011]: 2025-11-28 10:17:24.039952105 +0000 UTC m=+0.136794425 container health_status 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 28 05:17:24 localhost podman[326010]: 2025-11-28 10:17:24.104462832 +0000 UTC m=+0.200666312 container health_status 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm)
Nov 28 05:17:24 localhost ceph-mgr[286188]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 234 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail
Nov 28 05:17:24 localhost podman[326011]: 2025-11-28 10:17:24.127614434 +0000 UTC m=+0.224456714 container exec_died 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Nov 28 05:17:24 localhost systemd[1]: 98976ce53429eeceb8d29e2e3bd0404e86bed3e45091f63795bb750b24b381a9.service: Deactivated successfully.
Nov 28 05:17:24 localhost podman[326010]: 2025-11-28 10:17:24.144450403 +0000 UTC m=+0.240653893 container exec_died 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 28 05:17:24 localhost systemd[1]: 783f7bdda3f71afff28110159f06f1a6805dbf850fe6e134f611291204eec4c3.service: Deactivated successfully.
Nov 28 05:17:24 localhost sshd[326093]: main: sshd: ssh-rsa algorithm is disabled
Nov 28 05:17:24 localhost systemd-logind[763]: New session 76 of user zuul.
Nov 28 05:17:24 localhost systemd[1]: Started Session 76 of User zuul.